Test Report: Hyperkit_macOS 19283

                    
8d2418a61c606cc3028c5bf9242bf095ec458362:2024-07-17:35383

Failed tests (11/338)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (124.19s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-572000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-572000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-572000 -v=7 --alsologtostderr: (27.058696736s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-572000 --wait=true -v=7 --alsologtostderr
E0717 10:33:50.055435    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-572000 --wait=true -v=7 --alsologtostderr: exit status 90 (1m34.376859448s)
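To reproduce the failing step outside CI, the sequence is just the three minikube invocations the runner logs above; a minimal sketch, assuming a hyperkit-capable macOS host and the same profile name used in this run:

	# Commands copied verbatim from the test log; -v=7 --alsologtostderr matches the test's verbosity.
	out/minikube-darwin-amd64 node list -p ha-572000 -v=7 --alsologtostderr
	out/minikube-darwin-amd64 stop -p ha-572000 -v=7 --alsologtostderr
	# The restart below is the step that returned exit status 90 in this run:
	out/minikube-darwin-amd64 start -p ha-572000 --wait=true -v=7 --alsologtostderr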

-- stdout --
	* [ha-572000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-572000" primary control-plane node in "ha-572000" cluster
	* Restarting existing hyperkit VM for "ha-572000" ...
	* Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	* Enabled addons: 
	
	* Starting "ha-572000-m02" control-plane node in "ha-572000" cluster
	* Restarting existing hyperkit VM for "ha-572000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5
	
	

-- /stdout --
** stderr ** 
	I0717 10:32:37.218202    3508 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:32:37.218482    3508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:32:37.218488    3508 out.go:304] Setting ErrFile to fd 2...
	I0717 10:32:37.218492    3508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:32:37.218678    3508 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
	I0717 10:32:37.220111    3508 out.go:298] Setting JSON to false
	I0717 10:32:37.243881    3508 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1928,"bootTime":1721235629,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0717 10:32:37.243971    3508 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:32:37.265852    3508 out.go:177] * [ha-572000] minikube v1.33.1 on Darwin 14.5
	I0717 10:32:37.307717    3508 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 10:32:37.307783    3508 notify.go:220] Checking for updates...
	I0717 10:32:37.352082    3508 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:32:37.394723    3508 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 10:32:37.416561    3508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:32:37.437566    3508 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
	I0717 10:32:37.458758    3508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:32:37.480259    3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:32:37.480391    3508 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:32:37.481074    3508 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:32:37.481147    3508 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:32:37.491120    3508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51745
	I0717 10:32:37.491492    3508 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:32:37.491919    3508 main.go:141] libmachine: Using API Version  1
	I0717 10:32:37.491928    3508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:32:37.492189    3508 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:32:37.492307    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:37.520549    3508 out.go:177] * Using the hyperkit driver based on existing profile
	I0717 10:32:37.563535    3508 start.go:297] selected driver: hyperkit
	I0717 10:32:37.563555    3508 start.go:901] validating driver "hyperkit" against &{Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:fals
e efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:32:37.563770    3508 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:32:37.563903    3508 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:32:37.564063    3508 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19283-1099/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0717 10:32:37.572774    3508 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0717 10:32:37.578697    3508 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:32:37.578722    3508 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0717 10:32:37.582004    3508 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:32:37.582058    3508 cni.go:84] Creating CNI manager for ""
	I0717 10:32:37.582066    3508 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 10:32:37.582150    3508 start.go:340] cluster config:
	{Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-t
iller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:32:37.582277    3508 iso.go:125] acquiring lock: {Name:mkf51f842bcc8a77e9c7c50d642c4c76848e96af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:32:37.624644    3508 out.go:177] * Starting "ha-572000" primary control-plane node in "ha-572000" cluster
	I0717 10:32:37.645662    3508 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:32:37.645750    3508 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0717 10:32:37.645778    3508 cache.go:56] Caching tarball of preloaded images
	I0717 10:32:37.645983    3508 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:32:37.646002    3508 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:32:37.646175    3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:32:37.647084    3508 start.go:360] acquireMachinesLock for ha-572000: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:32:37.647209    3508 start.go:364] duration metric: took 99.885µs to acquireMachinesLock for "ha-572000"
	I0717 10:32:37.647240    3508 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:32:37.647261    3508 fix.go:54] fixHost starting: 
	I0717 10:32:37.647673    3508 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:32:37.647700    3508 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:32:37.656651    3508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51747
	I0717 10:32:37.657021    3508 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:32:37.657336    3508 main.go:141] libmachine: Using API Version  1
	I0717 10:32:37.657346    3508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:32:37.657590    3508 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:32:37.657719    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:37.657832    3508 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:32:37.657936    3508 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:37.658021    3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 2926
	I0717 10:32:37.658989    3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid 2926 missing from process table
	I0717 10:32:37.658986    3508 fix.go:112] recreateIfNeeded on ha-572000: state=Stopped err=<nil>
	I0717 10:32:37.659004    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	W0717 10:32:37.659109    3508 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:32:37.701727    3508 out.go:177] * Restarting existing hyperkit VM for "ha-572000" ...
	I0717 10:32:37.722485    3508 main.go:141] libmachine: (ha-572000) Calling .Start
	I0717 10:32:37.722730    3508 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:37.722799    3508 main.go:141] libmachine: (ha-572000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid
	I0717 10:32:37.724830    3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid 2926 missing from process table
	I0717 10:32:37.724872    3508 main.go:141] libmachine: (ha-572000) DBG | pid 2926 is in state "Stopped"
	I0717 10:32:37.724889    3508 main.go:141] libmachine: (ha-572000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid...
	I0717 10:32:37.725226    3508 main.go:141] libmachine: (ha-572000) DBG | Using UUID 5f2666de-0b32-4258-9840-7856c1bd4173
	I0717 10:32:37.837447    3508 main.go:141] libmachine: (ha-572000) DBG | Generated MAC d2:a6:10:ad:80:98
	I0717 10:32:37.837476    3508 main.go:141] libmachine: (ha-572000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:32:37.837593    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5f2666de-0b32-4258-9840-7856c1bd4173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bc9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:32:37.837631    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5f2666de-0b32-4258-9840-7856c1bd4173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bc9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:32:37.837679    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5f2666de-0b32-4258-9840-7856c1bd4173", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/ha-572000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:32:37.837720    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5f2666de-0b32-4258-9840-7856c1bd4173 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/ha-572000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:32:37.837736    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:32:37.839166    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Pid is 3521
	I0717 10:32:37.839653    3508 main.go:141] libmachine: (ha-572000) DBG | Attempt 0
	I0717 10:32:37.839674    3508 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:37.839714    3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3521
	I0717 10:32:37.841412    3508 main.go:141] libmachine: (ha-572000) DBG | Searching for d2:a6:10:ad:80:98 in /var/db/dhcpd_leases ...
	I0717 10:32:37.841498    3508 main.go:141] libmachine: (ha-572000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:32:37.841515    3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:37:45:6a:f1:7f ID:1,1e:37:45:6a:f1:7f Lease:0x6698001b}
	I0717 10:32:37.841527    3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x6699517a}
	I0717 10:32:37.841536    3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:d3:62:da:43:cf ID:1,6e:d3:62:da:43:cf Lease:0x669950e4}
	I0717 10:32:37.841559    3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x66994ff6}
	I0717 10:32:37.841570    3508 main.go:141] libmachine: (ha-572000) DBG | Found match: d2:a6:10:ad:80:98
	I0717 10:32:37.841595    3508 main.go:141] libmachine: (ha-572000) DBG | IP: 192.169.0.5
	I0717 10:32:37.841705    3508 main.go:141] libmachine: (ha-572000) Calling .GetConfigRaw
	I0717 10:32:37.842357    3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:32:37.842580    3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:32:37.843052    3508 machine.go:94] provisionDockerMachine start ...
	I0717 10:32:37.843065    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:37.843201    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:37.843303    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:37.843420    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:37.843572    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:37.843663    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:37.843791    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:37.844002    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:37.844014    3508 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:32:37.847060    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:32:37.898878    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:32:37.899633    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:32:37.899658    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:32:37.899668    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:32:37.899678    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:32:38.277909    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:32:38.277922    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:32:38.392613    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:32:38.392633    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:32:38.392644    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:32:38.392676    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:32:38.393519    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:32:38.393530    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:32:43.648108    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:32:43.648154    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:32:43.648161    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:32:43.672680    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:32:48.904402    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:32:48.904418    3508 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:32:48.904582    3508 buildroot.go:166] provisioning hostname "ha-572000"
	I0717 10:32:48.904593    3508 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:32:48.904692    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:48.904776    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:48.904887    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:48.904976    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:48.905073    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:48.905225    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:48.905383    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:48.905392    3508 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000 && echo "ha-572000" | sudo tee /etc/hostname
	I0717 10:32:48.967564    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000
	
	I0717 10:32:48.967584    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:48.967740    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:48.967836    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:48.967934    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:48.968014    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:48.968132    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:48.968282    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:48.968293    3508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:32:49.026313    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:32:49.026336    3508 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:32:49.026353    3508 buildroot.go:174] setting up certificates
	I0717 10:32:49.026367    3508 provision.go:84] configureAuth start
	I0717 10:32:49.026375    3508 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:32:49.026507    3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:32:49.026613    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:49.026706    3508 provision.go:143] copyHostCerts
	I0717 10:32:49.026741    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:32:49.026811    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:32:49.026819    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:32:49.026972    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:32:49.027200    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:32:49.027231    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:32:49.027236    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:32:49.027325    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:32:49.027487    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:32:49.027519    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:32:49.027524    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:32:49.027590    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:32:49.027748    3508 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000 san=[127.0.0.1 192.169.0.5 ha-572000 localhost minikube]
	I0717 10:32:49.085766    3508 provision.go:177] copyRemoteCerts
	I0717 10:32:49.085812    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:32:49.085827    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:49.086112    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:49.086217    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.086305    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:49.086395    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:32:49.120573    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:32:49.120648    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:32:49.139510    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:32:49.139585    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0717 10:32:49.158247    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:32:49.158317    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:32:49.177520    3508 provision.go:87] duration metric: took 151.137832ms to configureAuth
	I0717 10:32:49.177532    3508 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:32:49.177693    3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:32:49.177706    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:49.177837    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:49.177945    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:49.178031    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.178106    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.178195    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:49.178315    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:49.178439    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:49.178454    3508 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:32:49.231928    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:32:49.231939    3508 buildroot.go:70] root file system type: tmpfs
	I0717 10:32:49.232011    3508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:32:49.232025    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:49.232158    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:49.232247    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.232341    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.232427    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:49.232563    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:49.232710    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:49.232755    3508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:32:49.295280    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:32:49.295308    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:49.295446    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:49.295550    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.295637    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.295723    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:49.295852    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:49.295991    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:49.296003    3508 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:32:50.972633    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:32:50.972648    3508 machine.go:97] duration metric: took 13.129388483s to provisionDockerMachine
	I0717 10:32:50.972660    3508 start.go:293] postStartSetup for "ha-572000" (driver="hyperkit")
	I0717 10:32:50.972668    3508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:32:50.972678    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:50.972893    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:32:50.972908    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:50.973007    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:50.973108    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:50.973193    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:50.973281    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:32:51.011765    3508 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:32:51.016752    3508 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:32:51.016768    3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:32:51.016865    3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:32:51.017004    3508 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:32:51.017011    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:32:51.017179    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:32:51.027779    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:32:51.057568    3508 start.go:296] duration metric: took 84.89741ms for postStartSetup
	I0717 10:32:51.057590    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:51.057768    3508 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:32:51.057780    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:51.057871    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:51.057953    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:51.058038    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:51.058120    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:32:51.090670    3508 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:32:51.090728    3508 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:32:51.124190    3508 fix.go:56] duration metric: took 13.476731728s for fixHost
	I0717 10:32:51.124211    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:51.124344    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:51.124460    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:51.124556    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:51.124646    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:51.124769    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:51.124925    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:51.124933    3508 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 10:32:51.178019    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237571.303332168
	
	I0717 10:32:51.178031    3508 fix.go:216] guest clock: 1721237571.303332168
	I0717 10:32:51.178046    3508 fix.go:229] Guest: 2024-07-17 10:32:51.303332168 -0700 PDT Remote: 2024-07-17 10:32:51.124202 -0700 PDT m=+13.941974821 (delta=179.130168ms)
	I0717 10:32:51.178065    3508 fix.go:200] guest clock delta is within tolerance: 179.130168ms
	I0717 10:32:51.178069    3508 start.go:83] releasing machines lock for "ha-572000", held for 13.530645229s
	I0717 10:32:51.178090    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:51.178220    3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:32:51.178321    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:51.178658    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:51.178764    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:51.178848    3508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:32:51.178881    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:51.178898    3508 ssh_runner.go:195] Run: cat /version.json
	I0717 10:32:51.178911    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:51.178978    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:51.179001    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:51.179061    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:51.179087    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:51.179158    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:51.179178    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:51.179272    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:32:51.179286    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:32:51.214891    3508 ssh_runner.go:195] Run: systemctl --version
	I0717 10:32:51.259994    3508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 10:32:51.264962    3508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:32:51.265002    3508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:32:51.277704    3508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:32:51.277717    3508 start.go:495] detecting cgroup driver to use...
	I0717 10:32:51.277809    3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:32:51.295436    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:32:51.304332    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:32:51.313061    3508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:32:51.313115    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:32:51.321793    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:32:51.330506    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:32:51.339262    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:32:51.347997    3508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:32:51.356934    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:32:51.365798    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:32:51.374520    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:32:51.383330    3508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:32:51.391096    3508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:32:51.398988    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:32:51.492043    3508 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:32:51.510670    3508 start.go:495] detecting cgroup driver to use...
	I0717 10:32:51.510748    3508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:32:51.522109    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:32:51.533578    3508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:32:51.547583    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:32:51.558324    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:32:51.568495    3508 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:32:51.586295    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:32:51.596174    3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:32:51.611388    3508 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:32:51.614154    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:32:51.621515    3508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:32:51.636315    3508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:32:51.730805    3508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:32:51.833325    3508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:32:51.833396    3508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:32:51.849329    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:32:51.950120    3508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:32:54.304256    3508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.354082061s)
	I0717 10:32:54.304312    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:32:54.314507    3508 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0717 10:32:54.327160    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:32:54.337277    3508 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:32:54.428967    3508 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:32:54.528124    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:32:54.629785    3508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:32:54.644492    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:32:54.655322    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:32:54.750191    3508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 10:32:54.814687    3508 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:32:54.814779    3508 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:32:54.819517    3508 start.go:563] Will wait 60s for crictl version
	I0717 10:32:54.819571    3508 ssh_runner.go:195] Run: which crictl
	I0717 10:32:54.823230    3508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:32:54.848640    3508 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 10:32:54.848713    3508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:32:54.866198    3508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:32:54.925410    3508 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:32:54.925479    3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:32:54.925865    3508 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:32:54.930367    3508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:32:54.939983    3508 kubeadm.go:883] updating cluster {Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false f
reshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 10:32:54.940088    3508 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:32:54.940151    3508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 10:32:54.953243    3508 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0717 10:32:54.953256    3508 docker.go:615] Images already preloaded, skipping extraction
	I0717 10:32:54.953343    3508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 10:32:54.966247    3508 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0717 10:32:54.966267    3508 cache_images.go:84] Images are preloaded, skipping loading
	I0717 10:32:54.966280    3508 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.30.2 docker true true} ...
	I0717 10:32:54.966352    3508 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-572000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
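The drop-in above (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps later) relies on standard systemd override semantics: the empty ExecStart= line clears whatever ExecStart the base kubelet.service defines, and the second line installs minikube's full kubelet command line with the node-specific --hostname-override and --node-ip flags. To see what actually takes effect on the node:

	systemctl cat kubelet    # shows the base unit plus the 10-kubeadm.conf drop-in, in override order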
	I0717 10:32:54.966420    3508 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 10:32:54.987201    3508 cni.go:84] Creating CNI manager for ""
	I0717 10:32:54.987214    3508 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 10:32:54.987234    3508 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 10:32:54.987251    3508 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-572000 NodeName:ha-572000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 10:32:54.987337    3508 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-572000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
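The rendered config above is a multi-document kubeadm file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is copied to /var/tmp/minikube/kubeadm.yaml.new below. A sketch for sanity-checking such a file on the node, assuming the `kubeadm config validate` subcommand is available in this kubeadm release:

	sudo /var/lib/minikube/binaries/v1.30.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new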
	I0717 10:32:54.987354    3508 kube-vip.go:115] generating kube-vip config ...
	I0717 10:32:54.987400    3508 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 10:32:54.999700    3508 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 10:32:54.999787    3508 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
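kube-vip runs as a static pod (the manifest above is copied to /etc/kubernetes/manifests/kube-vip.yaml below): it claims the API-server VIP 192.169.0.254 on eth0 via ARP, and with cp_enable/lb_enable it load-balances port 8443 across control-plane members, using the plndr-cp-lock lease for leader election. Illustrative checks once the cluster is reachable (not part of the run):

	kubectl -n kube-system get lease plndr-cp-lock        # current kube-vip leader
	ip addr show eth0 | grep 192.169.0.254                # in ARP mode the VIP is bound on the leader only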
	I0717 10:32:54.999838    3508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:32:55.007455    3508 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:32:55.007500    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 10:32:55.014894    3508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0717 10:32:55.028112    3508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:32:55.043389    3508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0717 10:32:55.057830    3508 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0717 10:32:55.071316    3508 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0717 10:32:55.074184    3508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:32:55.083466    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:32:55.183439    3508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:32:55.197167    3508 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000 for IP: 192.169.0.5
	I0717 10:32:55.197180    3508 certs.go:194] generating shared ca certs ...
	I0717 10:32:55.197190    3508 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.197338    3508 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:32:55.197396    3508 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:32:55.197406    3508 certs.go:256] generating profile certs ...
	I0717 10:32:55.197495    3508 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key
	I0717 10:32:55.197518    3508 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7
	I0717 10:32:55.197535    3508 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0717 10:32:55.361955    3508 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7 ...
	I0717 10:32:55.361972    3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7: {Name:mk29664a7594975eea689d2f8ed48fdc71e62969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.362392    3508 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7 ...
	I0717 10:32:55.362403    3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7: {Name:mk57740b7d279f3d01c1e4241799a0ef5b1e79c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.362628    3508 certs.go:381] copying /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7 -> /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt
	I0717 10:32:55.362825    3508 certs.go:385] copying /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7 -> /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key
	I0717 10:32:55.363038    3508 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key
	I0717 10:32:55.363048    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:32:55.363071    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:32:55.363089    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:32:55.363110    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:32:55.363127    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 10:32:55.363144    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 10:32:55.363163    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 10:32:55.363191    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 10:32:55.363269    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:32:55.363307    3508 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:32:55.363315    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:32:55.363344    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:32:55.363373    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:32:55.363400    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:32:55.363474    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:32:55.363509    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:32:55.363530    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:32:55.363548    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:32:55.363978    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:32:55.392580    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:32:55.424360    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:32:55.448923    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:32:55.478217    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 10:32:55.513430    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 10:32:55.570074    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 10:32:55.603052    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 10:32:55.623021    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:32:55.641658    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:32:55.661447    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:32:55.681020    3508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 10:32:55.694280    3508 ssh_runner.go:195] Run: openssl version
	I0717 10:32:55.698669    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:32:55.707011    3508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:32:55.710297    3508 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:32:55.710338    3508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:32:55.714541    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:32:55.722665    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:32:55.730951    3508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:32:55.734212    3508 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:32:55.734256    3508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:32:55.738428    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
	I0717 10:32:55.746621    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:32:55.754849    3508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:32:55.758298    3508 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:32:55.758341    3508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:32:55.762565    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 10:32:55.770829    3508 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:32:55.774715    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 10:32:55.780174    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 10:32:55.784640    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 10:32:55.789061    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 10:32:55.793372    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 10:32:55.797672    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
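Each `openssl x509 -checkend 86400` above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how minikube decides whether an existing cert can be reused. The same check in standalone form:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expiring within 24h"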
	I0717 10:32:55.802149    3508 kubeadm.go:392] StartCluster: {Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 C
lusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:32:55.802263    3508 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 10:32:55.813831    3508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 10:32:55.821229    3508 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 10:32:55.821245    3508 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 10:32:55.821296    3508 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 10:32:55.828842    3508 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 10:32:55.829172    3508 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-572000" does not appear in /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:32:55.829253    3508 kubeconfig.go:62] /Users/jenkins/minikube-integration/19283-1099/kubeconfig needs updating (will repair): [kubeconfig missing "ha-572000" cluster setting kubeconfig missing "ha-572000" context setting]
	I0717 10:32:55.829432    3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.829834    3508 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:32:55.830028    3508 kapi.go:59] client config for ha-572000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x71e8b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 10:32:55.830325    3508 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 10:32:55.830504    3508 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 10:32:55.837614    3508 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0717 10:32:55.837631    3508 kubeadm.go:597] duration metric: took 16.382346ms to restartPrimaryControlPlane
	I0717 10:32:55.837636    3508 kubeadm.go:394] duration metric: took 35.493194ms to StartCluster
	I0717 10:32:55.837647    3508 settings.go:142] acquiring lock: {Name:mkc45f011a907c66e2dbca7dadfff37ab48f7d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.837726    3508 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:32:55.838160    3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.838398    3508 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:32:55.838411    3508 start.go:241] waiting for startup goroutines ...
	I0717 10:32:55.838425    3508 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 10:32:55.838529    3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:32:55.881476    3508 out.go:177] * Enabled addons: 
	I0717 10:32:55.902556    3508 addons.go:510] duration metric: took 64.135812ms for enable addons: enabled=[]
	I0717 10:32:55.902605    3508 start.go:246] waiting for cluster config update ...
	I0717 10:32:55.902617    3508 start.go:255] writing updated cluster config ...
	I0717 10:32:55.924553    3508 out.go:177] 
	I0717 10:32:55.945720    3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:32:55.945818    3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:32:55.967938    3508 out.go:177] * Starting "ha-572000-m02" control-plane node in "ha-572000" cluster
	I0717 10:32:56.010383    3508 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:32:56.010417    3508 cache.go:56] Caching tarball of preloaded images
	I0717 10:32:56.010593    3508 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:32:56.010613    3508 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:32:56.010735    3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:32:56.011714    3508 start.go:360] acquireMachinesLock for ha-572000-m02: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:32:56.011815    3508 start.go:364] duration metric: took 76.983µs to acquireMachinesLock for "ha-572000-m02"
	I0717 10:32:56.011840    3508 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:32:56.011849    3508 fix.go:54] fixHost starting: m02
	I0717 10:32:56.012268    3508 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:32:56.012290    3508 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:32:56.021749    3508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51769
	I0717 10:32:56.022134    3508 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:32:56.022452    3508 main.go:141] libmachine: Using API Version  1
	I0717 10:32:56.022466    3508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:32:56.022707    3508 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:32:56.022831    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:32:56.022920    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetState
	I0717 10:32:56.023010    3508 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:56.023088    3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 3461
	I0717 10:32:56.024015    3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid 3461 missing from process table
	I0717 10:32:56.024031    3508 fix.go:112] recreateIfNeeded on ha-572000-m02: state=Stopped err=<nil>
	I0717 10:32:56.024040    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	W0717 10:32:56.024134    3508 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:32:56.066377    3508 out.go:177] * Restarting existing hyperkit VM for "ha-572000-m02" ...
	I0717 10:32:56.087674    3508 main.go:141] libmachine: (ha-572000-m02) Calling .Start
	I0717 10:32:56.087950    3508 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:56.087999    3508 main.go:141] libmachine: (ha-572000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid
	I0717 10:32:56.089806    3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid 3461 missing from process table
	I0717 10:32:56.089821    3508 main.go:141] libmachine: (ha-572000-m02) DBG | pid 3461 is in state "Stopped"
	I0717 10:32:56.089839    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid...
	I0717 10:32:56.090122    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Using UUID b5da5881-83da-4916-aec8-9a96c30c8c05
	I0717 10:32:56.117133    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Generated MAC 2:60:33:0:68:8b
	I0717 10:32:56.117180    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:32:56.117265    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b5da5881-83da-4916-aec8-9a96c30c8c05", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bea20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:32:56.117293    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b5da5881-83da-4916-aec8-9a96c30c8c05", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bea20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:32:56.117357    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b5da5881-83da-4916-aec8-9a96c30c8c05", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/ha-572000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machine
s/ha-572000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:32:56.117402    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b5da5881-83da-4916-aec8-9a96c30c8c05 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/ha-572000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd,earlyprintk=serial loglevel=3 console=t
tyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:32:56.117418    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:32:56.118762    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Pid is 3526
	I0717 10:32:56.119239    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Attempt 0
	I0717 10:32:56.119252    3508 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:56.119326    3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 3526
	I0717 10:32:56.121158    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Searching for 2:60:33:0:68:8b in /var/db/dhcpd_leases ...
	I0717 10:32:56.121244    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:32:56.121275    3508 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x669951be}
	I0717 10:32:56.121292    3508 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:37:45:6a:f1:7f ID:1,1e:37:45:6a:f1:7f Lease:0x6698001b}
	I0717 10:32:56.121303    3508 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x6699517a}
	I0717 10:32:56.121311    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Found match: 2:60:33:0:68:8b
	I0717 10:32:56.121322    3508 main.go:141] libmachine: (ha-572000-m02) DBG | IP: 192.169.0.6
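The hyperkit driver recovers a restarted VM's IP by scanning the macOS DHCP server's lease file for the MAC it generated for the guest NIC, which is what the search above is doing. A roughly equivalent manual lookup on the macOS host (the MAC here is the one generated for ha-572000-m02):

	sudo grep -B2 -A4 '2:60:33:0:68:8b' /var/db/dhcpd_leases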
	I0717 10:32:56.121381    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetConfigRaw
	I0717 10:32:56.122119    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:32:56.122366    3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:32:56.122967    3508 machine.go:94] provisionDockerMachine start ...
	I0717 10:32:56.122978    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:32:56.123097    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:32:56.123191    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:32:56.123279    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:32:56.123377    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:32:56.123509    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:32:56.123686    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:56.123860    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:32:56.123869    3508 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:32:56.127424    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:32:56.136905    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:32:56.138099    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:32:56.138119    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:32:56.138127    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:32:56.138133    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:32:56.517427    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:32:56.517452    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:32:56.632129    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:32:56.632146    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:32:56.632154    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:32:56.632161    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:32:56.632978    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:32:56.632987    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:33:01.882277    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:33:01.882372    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:33:01.882381    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:33:01.905950    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:33:07.183510    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:33:07.183524    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:33:07.183678    3508 buildroot.go:166] provisioning hostname "ha-572000-m02"
	I0717 10:33:07.183687    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:33:07.183789    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.183881    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.183992    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.184084    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.184179    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.184316    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:07.184458    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:07.184466    3508 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000-m02 && echo "ha-572000-m02" | sudo tee /etc/hostname
	I0717 10:33:07.250039    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000-m02
	
	I0717 10:33:07.250065    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.250206    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.250287    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.250390    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.250483    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.250636    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:07.250802    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:07.250815    3508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:33:07.311401    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:33:07.311420    3508 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:33:07.311431    3508 buildroot.go:174] setting up certificates
	I0717 10:33:07.311441    3508 provision.go:84] configureAuth start
	I0717 10:33:07.311448    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:33:07.311593    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:33:07.311680    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.311768    3508 provision.go:143] copyHostCerts
	I0717 10:33:07.311797    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:33:07.311852    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:33:07.311858    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:33:07.312271    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:33:07.312505    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:33:07.312536    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:33:07.312541    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:33:07.312619    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:33:07.312779    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:33:07.312811    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:33:07.312816    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:33:07.312912    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:33:07.313069    3508 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000-m02 san=[127.0.0.1 192.169.0.6 ha-572000-m02 localhost minikube]
	I0717 10:33:07.375154    3508 provision.go:177] copyRemoteCerts
	I0717 10:33:07.375212    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:33:07.375227    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.375382    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.375473    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.375558    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.375656    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:33:07.409433    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:33:07.409505    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:33:07.429479    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:33:07.429539    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 10:33:07.451163    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:33:07.451231    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:33:07.471509    3508 provision.go:87] duration metric: took 160.057268ms to configureAuth
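	The server certificate generated at 10:33:07 (org=jenkins.ha-572000-m02, SANs 127.0.0.1 192.169.0.6 ha-572000-m02 localhost minikube) is the one dockerd will present on tcp://0.0.0.0:2376 once the TLS flags in the unit file below take effect. A minimal verification sketch from the host, assuming the openssl and docker CLIs are installed there (not part of the test run):
	# print the SAN list baked into the generated server certificate
	openssl x509 -in /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	# confirm dockerd answers mutual TLS on 2376 with the same CA/client material copied above
	docker --host tcp://192.169.0.6:2376 --tlsverify \
	  --tlscacert /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem \
	  --tlscert /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem \
	  --tlskey /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem version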
	I0717 10:33:07.471523    3508 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:33:07.471702    3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:33:07.471715    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:07.471860    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.471964    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.472045    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.472140    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.472216    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.472319    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:07.472438    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:07.472446    3508 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:33:07.526742    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:33:07.526766    3508 buildroot.go:70] root file system type: tmpfs
	I0717 10:33:07.526848    3508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:33:07.526860    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.526992    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.527094    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.527175    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.527248    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.527375    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:07.527510    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:07.527555    3508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:33:07.594480    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:33:07.594502    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.594640    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.594720    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.594808    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.594894    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.595019    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:07.595164    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:07.595178    3508 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:33:09.291500    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:33:09.291515    3508 machine.go:97] duration metric: took 13.164785942s to provisionDockerMachine
	I0717 10:33:09.291524    3508 start.go:293] postStartSetup for "ha-572000-m02" (driver="hyperkit")
	I0717 10:33:09.291531    3508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:33:09.291546    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.291729    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:33:09.291743    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:09.291855    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:09.291956    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.292049    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:09.292155    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:33:09.335381    3508 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:33:09.338532    3508 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:33:09.338541    3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:33:09.338631    3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:33:09.338771    3508 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:33:09.338778    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:33:09.338937    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:33:09.346285    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:33:09.366379    3508 start.go:296] duration metric: took 74.672934ms for postStartSetup
	I0717 10:33:09.366399    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.366579    3508 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:33:09.366592    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:09.366681    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:09.366764    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.366841    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:09.366910    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:33:09.399615    3508 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:33:09.399679    3508 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:33:09.453746    3508 fix.go:56] duration metric: took 13.437754461s for fixHost
	I0717 10:33:09.453771    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:09.453917    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:09.454023    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.454133    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.454219    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:09.454344    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:09.454500    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:09.454509    3508 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 10:33:09.507516    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237589.628548940
	
	I0717 10:33:09.507529    3508 fix.go:216] guest clock: 1721237589.628548940
	I0717 10:33:09.507535    3508 fix.go:229] Guest: 2024-07-17 10:33:09.62854894 -0700 PDT Remote: 2024-07-17 10:33:09.453761 -0700 PDT m=+32.267325038 (delta=174.78794ms)
	I0717 10:33:09.507545    3508 fix.go:200] guest clock delta is within tolerance: 174.78794ms
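	fix.go samples the guest clock over SSH with "date +%s.%N", pairs it with the host clock at the moment the command returns, and accepts the restart only while the delta stays inside tolerance (about 175 ms here). A rough by-hand equivalent, assuming the SSH key and IP from this log and GNU date on the host (gdate from coreutils on macOS, an assumption):
	# guest clock; Buildroot ships GNU date, so %N (nanoseconds) works there
	GUEST=$(ssh -i /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa docker@192.169.0.6 'date +%s.%N')
	# host clock sampled immediately after the SSH call returns
	HOST=$(gdate +%s.%N)
	# positive delta: host ahead of guest; negative: guest ahead
	awk -v g="$GUEST" -v h="$HOST" 'BEGIN { printf "delta %+.3fs\n", h - g }'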
	I0717 10:33:09.507551    3508 start.go:83] releasing machines lock for "ha-572000-m02", held for 13.491465012s
	I0717 10:33:09.507572    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.507699    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:33:09.532514    3508 out.go:177] * Found network options:
	I0717 10:33:09.552891    3508 out.go:177]   - NO_PROXY=192.169.0.5
	W0717 10:33:09.574387    3508 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:33:09.574424    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.575230    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.575434    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.575533    3508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:33:09.575579    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	W0717 10:33:09.575674    3508 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:33:09.575742    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:09.575769    3508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 10:33:09.575787    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:09.575982    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.576003    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:09.576234    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.576305    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:09.576479    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:09.576483    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:33:09.576596    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	W0717 10:33:09.607732    3508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:33:09.607792    3508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:33:09.656923    3508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:33:09.656940    3508 start.go:495] detecting cgroup driver to use...
	I0717 10:33:09.657029    3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:33:09.673202    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:33:09.682149    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:33:09.691293    3508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:33:09.691348    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:33:09.700430    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:33:09.709231    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:33:09.718168    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:33:09.727036    3508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:33:09.736298    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:33:09.745642    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:33:09.754690    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:33:09.763621    3508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:33:09.771717    3508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:33:09.779861    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:33:09.883183    3508 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:33:09.901989    3508 start.go:495] detecting cgroup driver to use...
	I0717 10:33:09.902056    3508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:33:09.919371    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:33:09.932597    3508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:33:09.953462    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:33:09.964583    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:33:09.975437    3508 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:33:09.995754    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:33:10.006015    3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:33:10.020825    3508 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:33:10.023692    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:33:10.030648    3508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:33:10.044228    3508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:33:10.141170    3508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:33:10.249186    3508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:33:10.249214    3508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
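	The 130-byte daemon.json pushed here forces the cgroupfs cgroup driver on dockerd, mirroring the SystemdCgroup=false rewrite applied to containerd earlier. The exact payload is held in memory and not echoed in the log; a typical cgroupfs daemon.json of roughly that size looks like the following (illustrative content, not the recorded bytes):
	# /etc/docker/daemon.json (illustrative; the real payload is not shown in the log)
	cat <<'EOF' | sudo tee /etc/docker/daemon.json
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}
	EOF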
	I0717 10:33:10.263041    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:33:10.359716    3508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:34:11.416224    3508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.021941021s)
	I0717 10:34:11.416300    3508 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0717 10:34:11.450835    3508 out.go:177] 
	W0717 10:34:11.471671    3508 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 17:33:08 ha-572000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 17:33:08 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:08.044876852Z" level=info msg="Starting up"
	Jul 17 17:33:08 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:08.045449556Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 17:33:08 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:08.049003475Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=496
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.064081003Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079222179Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079310364Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079376764Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079411371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079557600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079609621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079752864Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079797312Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079887739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079928799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.080046807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.080239575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.081923027Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.081977822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082123136Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082166838Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082275842Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082332754Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084273060Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084339651Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084378389Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084411359Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084442922Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084509418Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084664339Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084738339Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084774254Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084804627Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084874943Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084911894Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084942267Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084972768Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085003365Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085032856Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085062302Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085090775Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085129743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085161980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085192066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085224112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085253798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085286177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085315810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085345112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085374976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085410351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085440979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085471089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085500214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085532017Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085571085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085603089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085635203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085683933Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085717630Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085747936Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085777505Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085805608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085834007Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085861655Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086142807Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086206245Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086259095Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086322237Z" level=info msg="containerd successfully booted in 0.022994s"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.065923436Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.108166477Z" level=info msg="Loading containers: start."
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.277192209Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.336888641Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.380874805Z" level=info msg="Loading containers: done."
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.387565385Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.387757279Z" level=info msg="Daemon has completed initialization"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.411794010Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.411982753Z" level=info msg="API listen on [::]:2376"
	Jul 17 17:33:09 ha-572000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.490943827Z" level=info msg="Processing signal 'terminated'"
	Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.491923813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.491997518Z" level=info msg="Daemon shutdown complete"
	Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.492029261Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.492040420Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 17:33:10 ha-572000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 17:33:11 ha-572000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 17:33:11 ha-572000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 17:33:11 ha-572000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 17:33:11 ha-572000-m02 dockerd[1164]: time="2024-07-17T17:33:11.528450348Z" level=info msg="Starting up"
	Jul 17 17:34:11 ha-572000-m02 dockerd[1164]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 17:34:11 ha-572000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 17:34:11 ha-572000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 17:34:11 ha-572000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0717 10:34:11.471802    3508 out.go:239] * 
	W0717 10:34:11.473037    3508 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:34:11.536857    3508 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p ha-572000 -v=7 --alsologtostderr" : exit status 90
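The start failed because dockerd on ha-572000-m02 timed out dialing /run/containerd/containerd.sock (see the journal excerpt above). A minimal triage sketch, assuming the m02 VM is still reachable over SSH and that the guest image runs a containerd systemd unit (neither is confirmed by this log):

	# inspect containerd on the affected node (assumes a 'containerd' unit exists in the guest)
	out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m02 "sudo systemctl status containerd"
	# last containerd and docker journal entries around the failed start
	out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m02 "sudo journalctl -u containerd --no-pager -n 50"
	out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m02 "sudo journalctl -u docker --no-pager -n 50"
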
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-572000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-572000 -n ha-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-572000 -n ha-572000: exit status 2 (163.820356ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
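The `--format={{.Host}}` invocation above only reports the host field, so the Running output does not show which component check produced exit status 2; one way to see the full per-component breakdown (host, kubelet, apiserver, kubeconfig) with the same binary and profile would be:

	out/minikube-darwin-amd64 status -p ha-572000 --alsologtostderr
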
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-572000 logs -n 25: (2.233321155s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-572000 cp ha-572000-m03:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m02:/home/docker/cp-test_ha-572000-m03_ha-572000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m02 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m03_ha-572000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m03:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04:/home/docker/cp-test_ha-572000-m03_ha-572000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m04 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m03_ha-572000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-572000 cp testdata/cp-test.txt                                                                                            | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3497274976/001/cp-test_ha-572000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000:/home/docker/cp-test_ha-572000-m04_ha-572000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000 sudo cat                                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m04_ha-572000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m02:/home/docker/cp-test_ha-572000-m04_ha-572000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m02 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m04_ha-572000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m03:/home/docker/cp-test_ha-572000-m04_ha-572000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m03 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m04_ha-572000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-572000 node stop m02 -v=7                                                                                                 | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-572000 node start m02 -v=7                                                                                                | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:32 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-572000 -v=7                                                                                                       | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-572000 -v=7                                                                                                            | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT | 17 Jul 24 10:32 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-572000 --wait=true -v=7                                                                                                | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-572000                                                                                                            | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:34 PDT |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 10:32:37
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 10:32:37.218202    3508 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:32:37.218482    3508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:32:37.218488    3508 out.go:304] Setting ErrFile to fd 2...
	I0717 10:32:37.218492    3508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:32:37.218678    3508 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
	I0717 10:32:37.220111    3508 out.go:298] Setting JSON to false
	I0717 10:32:37.243881    3508 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1928,"bootTime":1721235629,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0717 10:32:37.243971    3508 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:32:37.265852    3508 out.go:177] * [ha-572000] minikube v1.33.1 on Darwin 14.5
	I0717 10:32:37.307717    3508 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 10:32:37.307783    3508 notify.go:220] Checking for updates...
	I0717 10:32:37.352082    3508 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:32:37.394723    3508 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 10:32:37.416561    3508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:32:37.437566    3508 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
	I0717 10:32:37.458758    3508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:32:37.480259    3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:32:37.480391    3508 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:32:37.481074    3508 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:32:37.481147    3508 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:32:37.491120    3508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51745
	I0717 10:32:37.491492    3508 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:32:37.491919    3508 main.go:141] libmachine: Using API Version  1
	I0717 10:32:37.491928    3508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:32:37.492189    3508 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:32:37.492307    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:37.520549    3508 out.go:177] * Using the hyperkit driver based on existing profile
	I0717 10:32:37.563535    3508 start.go:297] selected driver: hyperkit
	I0717 10:32:37.563555    3508 start.go:901] validating driver "hyperkit" against &{Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:fals
e efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:32:37.563770    3508 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:32:37.563903    3508 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:32:37.564063    3508 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19283-1099/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0717 10:32:37.572774    3508 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0717 10:32:37.578697    3508 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:32:37.578722    3508 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0717 10:32:37.582004    3508 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:32:37.582058    3508 cni.go:84] Creating CNI manager for ""
	I0717 10:32:37.582066    3508 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 10:32:37.582150    3508 start.go:340] cluster config:
	{Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-t
iller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:32:37.582277    3508 iso.go:125] acquiring lock: {Name:mkf51f842bcc8a77e9c7c50d642c4c76848e96af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:32:37.624644    3508 out.go:177] * Starting "ha-572000" primary control-plane node in "ha-572000" cluster
	I0717 10:32:37.645662    3508 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:32:37.645750    3508 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0717 10:32:37.645778    3508 cache.go:56] Caching tarball of preloaded images
	I0717 10:32:37.645983    3508 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:32:37.646002    3508 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:32:37.646175    3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:32:37.647084    3508 start.go:360] acquireMachinesLock for ha-572000: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:32:37.647209    3508 start.go:364] duration metric: took 99.885µs to acquireMachinesLock for "ha-572000"
	I0717 10:32:37.647240    3508 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:32:37.647261    3508 fix.go:54] fixHost starting: 
	I0717 10:32:37.647673    3508 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:32:37.647700    3508 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:32:37.656651    3508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51747
	I0717 10:32:37.657021    3508 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:32:37.657336    3508 main.go:141] libmachine: Using API Version  1
	I0717 10:32:37.657346    3508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:32:37.657590    3508 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:32:37.657719    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:37.657832    3508 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:32:37.657936    3508 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:37.658021    3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 2926
	I0717 10:32:37.658989    3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid 2926 missing from process table
	I0717 10:32:37.658986    3508 fix.go:112] recreateIfNeeded on ha-572000: state=Stopped err=<nil>
	I0717 10:32:37.659004    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	W0717 10:32:37.659109    3508 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:32:37.701727    3508 out.go:177] * Restarting existing hyperkit VM for "ha-572000" ...
	I0717 10:32:37.722485    3508 main.go:141] libmachine: (ha-572000) Calling .Start
	I0717 10:32:37.722730    3508 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:37.722799    3508 main.go:141] libmachine: (ha-572000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid
	I0717 10:32:37.724830    3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid 2926 missing from process table
	I0717 10:32:37.724872    3508 main.go:141] libmachine: (ha-572000) DBG | pid 2926 is in state "Stopped"
	I0717 10:32:37.724889    3508 main.go:141] libmachine: (ha-572000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid...
	I0717 10:32:37.725226    3508 main.go:141] libmachine: (ha-572000) DBG | Using UUID 5f2666de-0b32-4258-9840-7856c1bd4173
	I0717 10:32:37.837447    3508 main.go:141] libmachine: (ha-572000) DBG | Generated MAC d2:a6:10:ad:80:98
	I0717 10:32:37.837476    3508 main.go:141] libmachine: (ha-572000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:32:37.837593    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5f2666de-0b32-4258-9840-7856c1bd4173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bc9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:32:37.837631    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5f2666de-0b32-4258-9840-7856c1bd4173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bc9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:32:37.837679    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5f2666de-0b32-4258-9840-7856c1bd4173", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/ha-572000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:32:37.837720    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5f2666de-0b32-4258-9840-7856c1bd4173 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/ha-572000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:32:37.837736    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:32:37.839166    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Pid is 3521
	I0717 10:32:37.839653    3508 main.go:141] libmachine: (ha-572000) DBG | Attempt 0
	I0717 10:32:37.839674    3508 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:37.839714    3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3521
	I0717 10:32:37.841412    3508 main.go:141] libmachine: (ha-572000) DBG | Searching for d2:a6:10:ad:80:98 in /var/db/dhcpd_leases ...
	I0717 10:32:37.841498    3508 main.go:141] libmachine: (ha-572000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:32:37.841515    3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:37:45:6a:f1:7f ID:1,1e:37:45:6a:f1:7f Lease:0x6698001b}
	I0717 10:32:37.841527    3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x6699517a}
	I0717 10:32:37.841536    3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:d3:62:da:43:cf ID:1,6e:d3:62:da:43:cf Lease:0x669950e4}
	I0717 10:32:37.841559    3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x66994ff6}
	I0717 10:32:37.841570    3508 main.go:141] libmachine: (ha-572000) DBG | Found match: d2:a6:10:ad:80:98
	I0717 10:32:37.841595    3508 main.go:141] libmachine: (ha-572000) DBG | IP: 192.169.0.5
	I0717 10:32:37.841705    3508 main.go:141] libmachine: (ha-572000) Calling .GetConfigRaw
	I0717 10:32:37.842357    3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:32:37.842580    3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:32:37.843052    3508 machine.go:94] provisionDockerMachine start ...
	I0717 10:32:37.843065    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:37.843201    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:37.843303    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:37.843420    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:37.843572    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:37.843663    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:37.843791    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:37.844002    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:37.844014    3508 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:32:37.847060    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:32:37.898878    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:32:37.899633    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:32:37.899658    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:32:37.899668    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:32:37.899678    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:32:38.277909    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:32:38.277922    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:32:38.392613    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:32:38.392633    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:32:38.392644    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:32:38.392676    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:32:38.393519    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:32:38.393530    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:32:43.648108    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:32:43.648154    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:32:43.648161    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:32:43.672680    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:32:48.904402    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:32:48.904418    3508 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:32:48.904582    3508 buildroot.go:166] provisioning hostname "ha-572000"
	I0717 10:32:48.904593    3508 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:32:48.904692    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:48.904776    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:48.904887    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:48.904976    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:48.905073    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:48.905225    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:48.905383    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:48.905392    3508 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000 && echo "ha-572000" | sudo tee /etc/hostname
	I0717 10:32:48.967564    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000
	
	I0717 10:32:48.967584    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:48.967740    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:48.967836    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:48.967934    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:48.968014    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:48.968132    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:48.968282    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:48.968293    3508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:32:49.026313    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:32:49.026336    3508 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:32:49.026353    3508 buildroot.go:174] setting up certificates
	I0717 10:32:49.026367    3508 provision.go:84] configureAuth start
	I0717 10:32:49.026375    3508 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:32:49.026507    3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:32:49.026613    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:49.026706    3508 provision.go:143] copyHostCerts
	I0717 10:32:49.026741    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:32:49.026811    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:32:49.026819    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:32:49.026972    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:32:49.027200    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:32:49.027231    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:32:49.027236    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:32:49.027325    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:32:49.027487    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:32:49.027519    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:32:49.027524    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:32:49.027590    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:32:49.027748    3508 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000 san=[127.0.0.1 192.169.0.5 ha-572000 localhost minikube]
	I0717 10:32:49.085766    3508 provision.go:177] copyRemoteCerts
	I0717 10:32:49.085812    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:32:49.085827    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:49.086112    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:49.086217    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.086305    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:49.086395    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:32:49.120573    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:32:49.120648    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:32:49.139510    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:32:49.139585    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0717 10:32:49.158247    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:32:49.158317    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:32:49.177520    3508 provision.go:87] duration metric: took 151.137832ms to configureAuth
	I0717 10:32:49.177532    3508 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:32:49.177693    3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:32:49.177706    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:49.177837    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:49.177945    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:49.178031    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.178106    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.178195    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:49.178315    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:49.178439    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:49.178454    3508 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:32:49.231928    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:32:49.231939    3508 buildroot.go:70] root file system type: tmpfs
	I0717 10:32:49.232011    3508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:32:49.232025    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:49.232158    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:49.232247    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.232341    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.232427    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:49.232563    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:49.232710    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:49.232755    3508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:32:49.295280    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:32:49.295308    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:49.295446    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:49.295550    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.295637    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.295723    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:49.295852    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:49.295991    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:49.296003    3508 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:32:50.972633    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:32:50.972648    3508 machine.go:97] duration metric: took 13.129388483s to provisionDockerMachine
	I0717 10:32:50.972660    3508 start.go:293] postStartSetup for "ha-572000" (driver="hyperkit")
	I0717 10:32:50.972668    3508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:32:50.972678    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:50.972893    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:32:50.972908    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:50.973007    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:50.973108    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:50.973193    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:50.973281    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:32:51.011765    3508 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:32:51.016752    3508 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:32:51.016768    3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:32:51.016865    3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:32:51.017004    3508 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:32:51.017011    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:32:51.017179    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:32:51.027779    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:32:51.057568    3508 start.go:296] duration metric: took 84.89741ms for postStartSetup
	I0717 10:32:51.057590    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:51.057768    3508 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:32:51.057780    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:51.057871    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:51.057953    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:51.058038    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:51.058120    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:32:51.090670    3508 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:32:51.090728    3508 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:32:51.124190    3508 fix.go:56] duration metric: took 13.476731728s for fixHost
	I0717 10:32:51.124211    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:51.124344    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:51.124460    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:51.124556    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:51.124646    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:51.124769    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:51.124925    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:51.124933    3508 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 10:32:51.178019    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237571.303332168
	
	I0717 10:32:51.178031    3508 fix.go:216] guest clock: 1721237571.303332168
	I0717 10:32:51.178046    3508 fix.go:229] Guest: 2024-07-17 10:32:51.303332168 -0700 PDT Remote: 2024-07-17 10:32:51.124202 -0700 PDT m=+13.941974821 (delta=179.130168ms)
	I0717 10:32:51.178065    3508 fix.go:200] guest clock delta is within tolerance: 179.130168ms
	I0717 10:32:51.178069    3508 start.go:83] releasing machines lock for "ha-572000", held for 13.530645229s
	I0717 10:32:51.178090    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:51.178220    3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:32:51.178321    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:51.178658    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:51.178764    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:51.178848    3508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:32:51.178881    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:51.178898    3508 ssh_runner.go:195] Run: cat /version.json
	I0717 10:32:51.178911    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:51.178978    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:51.179001    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:51.179061    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:51.179087    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:51.179158    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:51.179178    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:51.179272    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:32:51.179286    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:32:51.214891    3508 ssh_runner.go:195] Run: systemctl --version
	I0717 10:32:51.259994    3508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 10:32:51.264962    3508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:32:51.265002    3508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:32:51.277704    3508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:32:51.277717    3508 start.go:495] detecting cgroup driver to use...
	I0717 10:32:51.277809    3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:32:51.295436    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:32:51.304332    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:32:51.313061    3508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:32:51.313115    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:32:51.321793    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:32:51.330506    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:32:51.339262    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:32:51.347997    3508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:32:51.356934    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:32:51.365798    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:32:51.374520    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:32:51.383330    3508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:32:51.391096    3508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:32:51.398988    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:32:51.492043    3508 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:32:51.510670    3508 start.go:495] detecting cgroup driver to use...
	I0717 10:32:51.510748    3508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:32:51.522109    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:32:51.533578    3508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:32:51.547583    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:32:51.558324    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:32:51.568495    3508 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:32:51.586295    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:32:51.596174    3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:32:51.611388    3508 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:32:51.614154    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:32:51.621515    3508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:32:51.636315    3508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:32:51.730805    3508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:32:51.833325    3508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:32:51.833396    3508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:32:51.849329    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:32:51.950120    3508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:32:54.304256    3508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.354082061s)
	I0717 10:32:54.304312    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:32:54.314507    3508 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0717 10:32:54.327160    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:32:54.337277    3508 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:32:54.428967    3508 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:32:54.528124    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:32:54.629785    3508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:32:54.644492    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:32:54.655322    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:32:54.750191    3508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 10:32:54.814687    3508 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:32:54.814779    3508 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:32:54.819517    3508 start.go:563] Will wait 60s for crictl version
	I0717 10:32:54.819571    3508 ssh_runner.go:195] Run: which crictl
	I0717 10:32:54.823230    3508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:32:54.848640    3508 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 10:32:54.848713    3508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:32:54.866198    3508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:32:54.925410    3508 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:32:54.925479    3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:32:54.925865    3508 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:32:54.930367    3508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
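	The /etc/hosts rewrite above keeps exactly one host.minikube.internal entry: it filters out any existing line for that name, appends the current mapping, and copies the temp file back over /etc/hosts. A small sketch of the same pattern (the IP is the one reported for this run):
	
	  # drop any stale host.minikube.internal line, append the fresh mapping, then copy the temp file back
	  { grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.169.0.1\thost.minikube.internal\n'; } > /tmp/hosts.$$
	  sudo cp /tmp/hosts.$$ /etc/hosts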
	I0717 10:32:54.939983    3508 kubeadm.go:883] updating cluster {Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false f
reshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 10:32:54.940088    3508 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:32:54.940151    3508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 10:32:54.953243    3508 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0717 10:32:54.953256    3508 docker.go:615] Images already preloaded, skipping extraction
	I0717 10:32:54.953343    3508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 10:32:54.966247    3508 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0717 10:32:54.966267    3508 cache_images.go:84] Images are preloaded, skipping loading
	I0717 10:32:54.966280    3508 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.30.2 docker true true} ...
	I0717 10:32:54.966352    3508 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-572000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 10:32:54.966420    3508 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 10:32:54.987201    3508 cni.go:84] Creating CNI manager for ""
	I0717 10:32:54.987214    3508 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 10:32:54.987234    3508 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 10:32:54.987251    3508 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-572000 NodeName:ha-572000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 10:32:54.987337    3508 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-572000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 10:32:54.987354    3508 kube-vip.go:115] generating kube-vip config ...
	I0717 10:32:54.987400    3508 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 10:32:54.999700    3508 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 10:32:54.999787    3508 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0717 10:32:54.999838    3508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:32:55.007455    3508 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:32:55.007500    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 10:32:55.014894    3508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0717 10:32:55.028112    3508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:32:55.043389    3508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0717 10:32:55.057830    3508 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0717 10:32:55.071316    3508 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0717 10:32:55.074184    3508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:32:55.083466    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:32:55.183439    3508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:32:55.197167    3508 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000 for IP: 192.169.0.5
	I0717 10:32:55.197180    3508 certs.go:194] generating shared ca certs ...
	I0717 10:32:55.197190    3508 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.197338    3508 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:32:55.197396    3508 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:32:55.197406    3508 certs.go:256] generating profile certs ...
	I0717 10:32:55.197495    3508 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key
	I0717 10:32:55.197518    3508 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7
	I0717 10:32:55.197535    3508 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0717 10:32:55.361955    3508 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7 ...
	I0717 10:32:55.361972    3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7: {Name:mk29664a7594975eea689d2f8ed48fdc71e62969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.362392    3508 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7 ...
	I0717 10:32:55.362403    3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7: {Name:mk57740b7d279f3d01c1e4241799a0ef5b1e79c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.362628    3508 certs.go:381] copying /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7 -> /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt
	I0717 10:32:55.362825    3508 certs.go:385] copying /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7 -> /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key
	I0717 10:32:55.363038    3508 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key
	I0717 10:32:55.363048    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:32:55.363071    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:32:55.363089    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:32:55.363110    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:32:55.363127    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 10:32:55.363144    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 10:32:55.363163    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 10:32:55.363191    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 10:32:55.363269    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:32:55.363307    3508 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:32:55.363315    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:32:55.363344    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:32:55.363373    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:32:55.363400    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:32:55.363474    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:32:55.363509    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:32:55.363530    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:32:55.363548    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:32:55.363978    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:32:55.392580    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:32:55.424360    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:32:55.448923    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:32:55.478217    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 10:32:55.513430    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 10:32:55.570074    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 10:32:55.603052    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 10:32:55.623021    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:32:55.641658    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:32:55.661447    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:32:55.681020    3508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 10:32:55.694280    3508 ssh_runner.go:195] Run: openssl version
	I0717 10:32:55.698669    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:32:55.707011    3508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:32:55.710297    3508 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:32:55.710338    3508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:32:55.714541    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:32:55.722665    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:32:55.730951    3508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:32:55.734212    3508 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:32:55.734256    3508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:32:55.738428    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
	I0717 10:32:55.746621    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:32:55.754849    3508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:32:55.758298    3508 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:32:55.758341    3508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:32:55.762565    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 10:32:55.770829    3508 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:32:55.774715    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 10:32:55.780174    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 10:32:55.784640    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 10:32:55.789061    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 10:32:55.793372    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 10:32:55.797672    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
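	The openssl runs above do two things: each CA is linked into /etc/ssl/certs under its subject-hash name (the b5213941.0, 51391683.0 and 3ec20f2e.0 links), and each serving certificate is checked with -checkend 86400, which exits non-zero if the cert would expire within the next 24 hours. A short sketch of both steps, reusing paths from this run:
	
	  # link a CA into the trust store under its OpenSSL subject-hash name
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	  # exit status 1 here means the certificate expires within 86400 seconds (one day)
	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400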
	I0717 10:32:55.802149    3508 kubeadm.go:392] StartCluster: {Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 C
lusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:32:55.802263    3508 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 10:32:55.813831    3508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 10:32:55.821229    3508 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 10:32:55.821245    3508 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 10:32:55.821296    3508 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 10:32:55.828842    3508 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 10:32:55.829172    3508 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-572000" does not appear in /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:32:55.829253    3508 kubeconfig.go:62] /Users/jenkins/minikube-integration/19283-1099/kubeconfig needs updating (will repair): [kubeconfig missing "ha-572000" cluster setting kubeconfig missing "ha-572000" context setting]
	I0717 10:32:55.829432    3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.829834    3508 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:32:55.830028    3508 kapi.go:59] client config for ha-572000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x71e8b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 10:32:55.830325    3508 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 10:32:55.830504    3508 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 10:32:55.837614    3508 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0717 10:32:55.837631    3508 kubeadm.go:597] duration metric: took 16.382346ms to restartPrimaryControlPlane
	I0717 10:32:55.837636    3508 kubeadm.go:394] duration metric: took 35.493194ms to StartCluster
	I0717 10:32:55.837647    3508 settings.go:142] acquiring lock: {Name:mkc45f011a907c66e2dbca7dadfff37ab48f7d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.837726    3508 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:32:55.838160    3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.838398    3508 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:32:55.838411    3508 start.go:241] waiting for startup goroutines ...
	I0717 10:32:55.838425    3508 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 10:32:55.838529    3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:32:55.881476    3508 out.go:177] * Enabled addons: 
	I0717 10:32:55.902556    3508 addons.go:510] duration metric: took 64.135812ms for enable addons: enabled=[]
	I0717 10:32:55.902605    3508 start.go:246] waiting for cluster config update ...
	I0717 10:32:55.902617    3508 start.go:255] writing updated cluster config ...
	I0717 10:32:55.924553    3508 out.go:177] 
	I0717 10:32:55.945720    3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:32:55.945818    3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:32:55.967938    3508 out.go:177] * Starting "ha-572000-m02" control-plane node in "ha-572000" cluster
	I0717 10:32:56.010383    3508 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:32:56.010417    3508 cache.go:56] Caching tarball of preloaded images
	I0717 10:32:56.010593    3508 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:32:56.010613    3508 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:32:56.010735    3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:32:56.011714    3508 start.go:360] acquireMachinesLock for ha-572000-m02: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:32:56.011815    3508 start.go:364] duration metric: took 76.983µs to acquireMachinesLock for "ha-572000-m02"
	I0717 10:32:56.011840    3508 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:32:56.011849    3508 fix.go:54] fixHost starting: m02
	I0717 10:32:56.012268    3508 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:32:56.012290    3508 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:32:56.021749    3508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51769
	I0717 10:32:56.022134    3508 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:32:56.022452    3508 main.go:141] libmachine: Using API Version  1
	I0717 10:32:56.022466    3508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:32:56.022707    3508 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:32:56.022831    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:32:56.022920    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetState
	I0717 10:32:56.023010    3508 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:56.023088    3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 3461
	I0717 10:32:56.024015    3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid 3461 missing from process table
	I0717 10:32:56.024031    3508 fix.go:112] recreateIfNeeded on ha-572000-m02: state=Stopped err=<nil>
	I0717 10:32:56.024040    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	W0717 10:32:56.024134    3508 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:32:56.066377    3508 out.go:177] * Restarting existing hyperkit VM for "ha-572000-m02" ...
	I0717 10:32:56.087674    3508 main.go:141] libmachine: (ha-572000-m02) Calling .Start
	I0717 10:32:56.087950    3508 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:56.087999    3508 main.go:141] libmachine: (ha-572000-m02) minikube might have been shut down in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid
	I0717 10:32:56.089806    3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid 3461 missing from process table
	I0717 10:32:56.089821    3508 main.go:141] libmachine: (ha-572000-m02) DBG | pid 3461 is in state "Stopped"
	I0717 10:32:56.089839    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid...
	I0717 10:32:56.090122    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Using UUID b5da5881-83da-4916-aec8-9a96c30c8c05
	I0717 10:32:56.117133    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Generated MAC 2:60:33:0:68:8b
	I0717 10:32:56.117180    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:32:56.117265    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b5da5881-83da-4916-aec8-9a96c30c8c05", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bea20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:32:56.117293    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b5da5881-83da-4916-aec8-9a96c30c8c05", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bea20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:32:56.117357    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b5da5881-83da-4916-aec8-9a96c30c8c05", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/ha-572000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machine
s/ha-572000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:32:56.117402    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b5da5881-83da-4916-aec8-9a96c30c8c05 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/ha-572000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd,earlyprintk=serial loglevel=3 console=t
tyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:32:56.117418    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:32:56.118762    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Pid is 3526
	I0717 10:32:56.119239    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Attempt 0
	I0717 10:32:56.119252    3508 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:56.119326    3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 3526
	I0717 10:32:56.121158    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Searching for 2:60:33:0:68:8b in /var/db/dhcpd_leases ...
	I0717 10:32:56.121244    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:32:56.121275    3508 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x669951be}
	I0717 10:32:56.121292    3508 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:37:45:6a:f1:7f ID:1,1e:37:45:6a:f1:7f Lease:0x6698001b}
	I0717 10:32:56.121303    3508 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x6699517a}
	I0717 10:32:56.121311    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Found match: 2:60:33:0:68:8b
	I0717 10:32:56.121322    3508 main.go:141] libmachine: (ha-572000-m02) DBG | IP: 192.169.0.6
	I0717 10:32:56.121381    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetConfigRaw
	I0717 10:32:56.122119    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:32:56.122366    3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:32:56.122967    3508 machine.go:94] provisionDockerMachine start ...
	I0717 10:32:56.122978    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:32:56.123097    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:32:56.123191    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:32:56.123279    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:32:56.123377    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:32:56.123509    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:32:56.123686    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:56.123860    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:32:56.123869    3508 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:32:56.127424    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:32:56.136905    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:32:56.138099    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:32:56.138119    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:32:56.138127    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:32:56.138133    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:32:56.517427    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:32:56.517452    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:32:56.632129    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:32:56.632146    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:32:56.632154    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:32:56.632161    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:32:56.632978    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:32:56.632987    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:33:01.882277    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:33:01.882372    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:33:01.882381    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:33:01.905950    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:33:07.183510    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:33:07.183524    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:33:07.183678    3508 buildroot.go:166] provisioning hostname "ha-572000-m02"
	I0717 10:33:07.183687    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:33:07.183789    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.183881    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.183992    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.184084    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.184179    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.184316    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:07.184458    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:07.184466    3508 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000-m02 && echo "ha-572000-m02" | sudo tee /etc/hostname
	I0717 10:33:07.250039    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000-m02
	
	I0717 10:33:07.250065    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.250206    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.250287    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.250390    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.250483    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.250636    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:07.250802    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:07.250815    3508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:33:07.311401    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:33:07.311420    3508 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:33:07.311431    3508 buildroot.go:174] setting up certificates
	I0717 10:33:07.311441    3508 provision.go:84] configureAuth start
	I0717 10:33:07.311448    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:33:07.311593    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:33:07.311680    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.311768    3508 provision.go:143] copyHostCerts
	I0717 10:33:07.311797    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:33:07.311852    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:33:07.311858    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:33:07.312271    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:33:07.312505    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:33:07.312536    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:33:07.312541    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:33:07.312619    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:33:07.312779    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:33:07.312811    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:33:07.312816    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:33:07.312912    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:33:07.313069    3508 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000-m02 san=[127.0.0.1 192.169.0.6 ha-572000-m02 localhost minikube]
	I0717 10:33:07.375154    3508 provision.go:177] copyRemoteCerts
	I0717 10:33:07.375212    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:33:07.375227    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.375382    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.375473    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.375558    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.375656    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:33:07.409433    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:33:07.409505    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:33:07.429479    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:33:07.429539    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 10:33:07.451163    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:33:07.451231    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:33:07.471509    3508 provision.go:87] duration metric: took 160.057268ms to configureAuth
	I0717 10:33:07.471523    3508 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:33:07.471702    3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:33:07.471715    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:07.471860    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.471964    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.472045    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.472140    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.472216    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.472319    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:07.472438    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:07.472446    3508 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:33:07.526742    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:33:07.526766    3508 buildroot.go:70] root file system type: tmpfs
	I0717 10:33:07.526848    3508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:33:07.526860    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.526992    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.527094    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.527175    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.527248    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.527375    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:07.527510    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:07.527555    3508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:33:07.594480    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:33:07.594502    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.594640    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.594720    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.594808    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.594894    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.595019    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:07.595164    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:07.595178    3508 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:33:09.291500    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:33:09.291515    3508 machine.go:97] duration metric: took 13.164785942s to provisionDockerMachine
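	For readability, a minimal shell sketch of the unit-swap pattern the provisioner runs in the SSH command above (not minikube's own code; UNIT and NEW simply mirror the paths from the log): the candidate unit is written to docker.service.new, and only when it differs from (or is missing as) the installed copy is it moved into place and the daemon reloaded, enabled, and restarted.
	
	#!/bin/sh
	# Sketch of the "install unit only if changed" step shown above.
	# UNIT/NEW are taken from the logged command; adapt for other services.
	UNIT=/lib/systemd/system/docker.service
	NEW=${UNIT}.new
	if ! sudo diff -u "$UNIT" "$NEW"; then
	    sudo mv "$NEW" "$UNIT"
	    sudo systemctl daemon-reload
	    sudo systemctl enable docker
	    sudo systemctl restart docker
	fi
	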
	I0717 10:33:09.291524    3508 start.go:293] postStartSetup for "ha-572000-m02" (driver="hyperkit")
	I0717 10:33:09.291531    3508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:33:09.291546    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.291729    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:33:09.291743    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:09.291855    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:09.291956    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.292049    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:09.292155    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:33:09.335381    3508 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:33:09.338532    3508 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:33:09.338541    3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:33:09.338631    3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:33:09.338771    3508 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:33:09.338778    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:33:09.338937    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:33:09.346285    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:33:09.366379    3508 start.go:296] duration metric: took 74.672934ms for postStartSetup
	I0717 10:33:09.366399    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.366579    3508 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:33:09.366592    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:09.366681    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:09.366764    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.366841    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:09.366910    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:33:09.399615    3508 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:33:09.399679    3508 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:33:09.453746    3508 fix.go:56] duration metric: took 13.437754461s for fixHost
	I0717 10:33:09.453771    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:09.453917    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:09.454023    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.454133    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.454219    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:09.454344    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:09.454500    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:09.454509    3508 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 10:33:09.507516    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237589.628548940
	
	I0717 10:33:09.507529    3508 fix.go:216] guest clock: 1721237589.628548940
	I0717 10:33:09.507535    3508 fix.go:229] Guest: 2024-07-17 10:33:09.62854894 -0700 PDT Remote: 2024-07-17 10:33:09.453761 -0700 PDT m=+32.267325038 (delta=174.78794ms)
	I0717 10:33:09.507545    3508 fix.go:200] guest clock delta is within tolerance: 174.78794ms
	I0717 10:33:09.507551    3508 start.go:83] releasing machines lock for "ha-572000-m02", held for 13.491465012s
	I0717 10:33:09.507572    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.507699    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:33:09.532514    3508 out.go:177] * Found network options:
	I0717 10:33:09.552891    3508 out.go:177]   - NO_PROXY=192.169.0.5
	W0717 10:33:09.574387    3508 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:33:09.574424    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.575230    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.575434    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.575533    3508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:33:09.575579    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	W0717 10:33:09.575674    3508 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:33:09.575742    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:09.575769    3508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 10:33:09.575787    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:09.575982    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.576003    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:09.576234    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.576305    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:09.576479    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:09.576483    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:33:09.576596    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	W0717 10:33:09.607732    3508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:33:09.607792    3508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:33:09.656923    3508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:33:09.656940    3508 start.go:495] detecting cgroup driver to use...
	I0717 10:33:09.657029    3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:33:09.673202    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:33:09.682149    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:33:09.691293    3508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:33:09.691348    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:33:09.700430    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:33:09.709231    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:33:09.718168    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:33:09.727036    3508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:33:09.736298    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:33:09.745642    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:33:09.754690    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:33:09.763621    3508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:33:09.771717    3508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:33:09.779861    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:33:09.883183    3508 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:33:09.901989    3508 start.go:495] detecting cgroup driver to use...
	I0717 10:33:09.902056    3508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:33:09.919371    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:33:09.932597    3508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:33:09.953462    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:33:09.964583    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:33:09.975437    3508 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:33:09.995754    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:33:10.006015    3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:33:10.020825    3508 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:33:10.023692    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:33:10.030648    3508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:33:10.044228    3508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:33:10.141170    3508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:33:10.249186    3508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:33:10.249214    3508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:33:10.263041    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:33:10.359716    3508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:34:11.416224    3508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.021941021s)
	I0717 10:34:11.416300    3508 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0717 10:34:11.450835    3508 out.go:177] 
	W0717 10:34:11.471671    3508 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 17:33:08 ha-572000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 17:33:08 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:08.044876852Z" level=info msg="Starting up"
	Jul 17 17:33:08 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:08.045449556Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 17:33:08 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:08.049003475Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=496
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.064081003Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079222179Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079310364Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079376764Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079411371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079557600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079609621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079752864Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079797312Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079887739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079928799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.080046807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.080239575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.081923027Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.081977822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082123136Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082166838Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082275842Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082332754Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084273060Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084339651Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084378389Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084411359Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084442922Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084509418Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084664339Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084738339Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084774254Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084804627Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084874943Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084911894Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084942267Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084972768Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085003365Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085032856Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085062302Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085090775Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085129743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085161980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085192066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085224112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085253798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085286177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085315810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085345112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085374976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085410351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085440979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085471089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085500214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085532017Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085571085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085603089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085635203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085683933Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085717630Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085747936Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085777505Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085805608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085834007Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085861655Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086142807Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086206245Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086259095Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086322237Z" level=info msg="containerd successfully booted in 0.022994s"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.065923436Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.108166477Z" level=info msg="Loading containers: start."
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.277192209Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.336888641Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.380874805Z" level=info msg="Loading containers: done."
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.387565385Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.387757279Z" level=info msg="Daemon has completed initialization"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.411794010Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.411982753Z" level=info msg="API listen on [::]:2376"
	Jul 17 17:33:09 ha-572000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.490943827Z" level=info msg="Processing signal 'terminated'"
	Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.491923813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.491997518Z" level=info msg="Daemon shutdown complete"
	Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.492029261Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.492040420Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 17:33:10 ha-572000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 17:33:11 ha-572000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 17:33:11 ha-572000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 17:33:11 ha-572000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 17:33:11 ha-572000-m02 dockerd[1164]: time="2024-07-17T17:33:11.528450348Z" level=info msg="Starting up"
	Jul 17 17:34:11 ha-572000-m02 dockerd[1164]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 17:34:11 ha-572000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 17:34:11 ha-572000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 17:34:11 ha-572000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0717 10:34:11.471802    3508 out.go:239] * 
	W0717 10:34:11.473037    3508 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:34:11.536857    3508 out.go:177] 
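	To inspect this failure on the guest itself, the commands suggested by the error message above (plus the journal query the tool already ran), assuming SSH access to ha-572000-m02:
	
	# Suggested by the RUNTIME_ENABLE error text; shows why docker.service exited.
	systemctl status docker.service
	journalctl -xeu docker.service
	# The same query minikube ran to produce the excerpt above.
	sudo journalctl --no-pager -u docker
	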
	
	
	==> Docker <==
	Jul 17 17:33:02 ha-572000 dockerd[1178]: time="2024-07-17T17:33:02.455192722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:33:23 ha-572000 dockerd[1178]: time="2024-07-17T17:33:23.740501414Z" level=info msg="shim disconnected" id=a6e0742139d63216679ae18bc09ccda23c0ed0e4d7a419b36e966729969bea9b namespace=moby
	Jul 17 17:33:23 ha-572000 dockerd[1178]: time="2024-07-17T17:33:23.740886535Z" level=warning msg="cleaning up after shim disconnected" id=a6e0742139d63216679ae18bc09ccda23c0ed0e4d7a419b36e966729969bea9b namespace=moby
	Jul 17 17:33:23 ha-572000 dockerd[1178]: time="2024-07-17T17:33:23.741204478Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 17 17:33:23 ha-572000 dockerd[1171]: time="2024-07-17T17:33:23.741723202Z" level=info msg="ignoring event" container=a6e0742139d63216679ae18bc09ccda23c0ed0e4d7a419b36e966729969bea9b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 17:33:24 ha-572000 dockerd[1178]: time="2024-07-17T17:33:24.747049658Z" level=info msg="shim disconnected" id=3f41d1a5d2c920e02b958fc57cddcf6e76eb7e4b56f3544ba5066b682eea0c8d namespace=moby
	Jul 17 17:33:24 ha-572000 dockerd[1178]: time="2024-07-17T17:33:24.747592119Z" level=warning msg="cleaning up after shim disconnected" id=3f41d1a5d2c920e02b958fc57cddcf6e76eb7e4b56f3544ba5066b682eea0c8d namespace=moby
	Jul 17 17:33:24 ha-572000 dockerd[1178]: time="2024-07-17T17:33:24.747636154Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 17 17:33:24 ha-572000 dockerd[1171]: time="2024-07-17T17:33:24.747788453Z" level=info msg="ignoring event" container=3f41d1a5d2c920e02b958fc57cddcf6e76eb7e4b56f3544ba5066b682eea0c8d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 17:33:34 ha-572000 dockerd[1178]: time="2024-07-17T17:33:34.836028865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:33:34 ha-572000 dockerd[1178]: time="2024-07-17T17:33:34.836093957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:33:34 ha-572000 dockerd[1178]: time="2024-07-17T17:33:34.836105101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:33:34 ha-572000 dockerd[1178]: time="2024-07-17T17:33:34.836225522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:33:38 ha-572000 dockerd[1178]: time="2024-07-17T17:33:38.652806846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:33:38 ha-572000 dockerd[1178]: time="2024-07-17T17:33:38.652893670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:33:38 ha-572000 dockerd[1178]: time="2024-07-17T17:33:38.652906541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:33:38 ha-572000 dockerd[1178]: time="2024-07-17T17:33:38.657845113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:33:59 ha-572000 dockerd[1171]: time="2024-07-17T17:33:59.069677227Z" level=info msg="ignoring event" container=8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 17:33:59 ha-572000 dockerd[1178]: time="2024-07-17T17:33:59.071115848Z" level=info msg="shim disconnected" id=8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f namespace=moby
	Jul 17 17:33:59 ha-572000 dockerd[1178]: time="2024-07-17T17:33:59.071609934Z" level=warning msg="cleaning up after shim disconnected" id=8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f namespace=moby
	Jul 17 17:33:59 ha-572000 dockerd[1178]: time="2024-07-17T17:33:59.071768605Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 17 17:34:00 ha-572000 dockerd[1171]: time="2024-07-17T17:34:00.079691666Z" level=info msg="ignoring event" container=1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 17:34:00 ha-572000 dockerd[1178]: time="2024-07-17T17:34:00.081342846Z" level=info msg="shim disconnected" id=1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca namespace=moby
	Jul 17 17:34:00 ha-572000 dockerd[1178]: time="2024-07-17T17:34:00.081524291Z" level=warning msg="cleaning up after shim disconnected" id=1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca namespace=moby
	Jul 17 17:34:00 ha-572000 dockerd[1178]: time="2024-07-17T17:34:00.081549356Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8f09c09ed996a       56ce0fd9fb532                                                                                         34 seconds ago       Exited              kube-apiserver            2                   6d7eb0e874999       kube-apiserver-ha-572000
	1e8f9939826f4       e874818b3caac                                                                                         38 seconds ago       Exited              kube-controller-manager   2                   b7d58c526c444       kube-controller-manager-ha-572000
	138bf6784d59c       38af8ddebf499                                                                                         About a minute ago   Running             kube-vip                  0                   df04438a4c5cc       kube-vip-ha-572000
	a53f8fcdf5d97       7820c83aa1394                                                                                         About a minute ago   Running             kube-scheduler            1                   bfd880612991e       kube-scheduler-ha-572000
	a3398a8ca33aa       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      1                   986ceb5a6f870       etcd-ha-572000
	e1a5eb1bed550       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   3 minutes ago        Exited              busybox                   0                   29ab413131af2       busybox-fc5497c4f-5r4wl
	bb44d784bb7ab       cbb01a7bd410d                                                                                         6 minutes ago        Exited              coredns                   0                   b8c622f08395f       coredns-7db6d8ff4d-2phrp
	7b275812468c9       cbb01a7bd410d                                                                                         6 minutes ago        Exited              coredns                   0                   2588bd7c40c23       coredns-7db6d8ff4d-9dzd5
	12ba2e181ee9a       6e38f40d628db                                                                                         6 minutes ago        Exited              storage-provisioner       0                   04b7cdcbedf20       storage-provisioner
	6e40e1427ab20       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              6 minutes ago        Exited              kindnet-cni               0                   32dc836c0a2df       kindnet-t85bv
	2aeed19835352       53c535741fb44                                                                                         6 minutes ago        Exited              kube-proxy                0                   f688e08d591be       kube-proxy-hst7h
	9200160f355ce       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago        Exited              kube-vip                  0                   1742f4f388abf       kube-vip-ha-572000
	e29f4fe295c1c       7820c83aa1394                                                                                         6 minutes ago        Exited              kube-scheduler            0                   25d825604d9f6       kube-scheduler-ha-572000
	c6527d620dad2       3861cfcd7c04c                                                                                         6 minutes ago        Exited              etcd                      0                   8844aab508d79       etcd-ha-572000
	
	
	==> coredns [7b275812468c] <==
	[INFO] 10.244.0.4:49035 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001628883s
	[INFO] 10.244.0.4:59665 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081818s
	[INFO] 10.244.0.4:52274 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077378s
	[INFO] 10.244.1.2:47113 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103907s
	[INFO] 10.244.2.2:57796 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060561s
	[INFO] 10.244.2.2:40339 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000425199s
	[INFO] 10.244.2.2:38339 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010468s
	[INFO] 10.244.2.2:50750 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070547s
	[INFO] 10.244.0.4:57426 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000045039s
	[INFO] 10.244.0.4:44283 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031195s
	[INFO] 10.244.1.2:56783 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117359s
	[INFO] 10.244.1.2:34315 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061358s
	[INFO] 10.244.1.2:55792 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062223s
	[INFO] 10.244.2.2:35768 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086426s
	[INFO] 10.244.2.2:42473 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102022s
	[INFO] 10.244.0.4:53524 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000087177s
	[INFO] 10.244.0.4:35224 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079092s
	[INFO] 10.244.0.4:53020 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075585s
	[INFO] 10.244.1.2:58074 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138293s
	[INFO] 10.244.1.2:40567 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000088337s
	[INFO] 10.244.1.2:46301 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000065328s
	[INFO] 10.244.1.2:59898 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075157s
	[INFO] 10.244.2.2:48754 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000081888s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bb44d784bb7a] <==
	[INFO] 10.244.0.4:54052 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001019206s
	[INFO] 10.244.0.4:45412 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077905s
	[INFO] 10.244.0.4:37924 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159382s
	[INFO] 10.244.1.2:45862 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108196s
	[INFO] 10.244.1.2:47464 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000616154s
	[INFO] 10.244.1.2:53720 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059338s
	[INFO] 10.244.1.2:49774 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123053s
	[INFO] 10.244.1.2:54472 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000097287s
	[INFO] 10.244.1.2:52054 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064095s
	[INFO] 10.244.1.2:54428 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064142s
	[INFO] 10.244.2.2:53088 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109393s
	[INFO] 10.244.2.2:49993 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000114279s
	[INFO] 10.244.2.2:42708 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063661s
	[INFO] 10.244.2.2:57150 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000043434s
	[INFO] 10.244.0.4:45987 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112397s
	[INFO] 10.244.0.4:44116 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057797s
	[INFO] 10.244.1.2:37635 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105428s
	[INFO] 10.244.2.2:52081 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131248s
	[INFO] 10.244.2.2:54912 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087858s
	[INFO] 10.244.0.4:53981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079367s
	[INFO] 10.244.2.2:45850 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152036s
	[INFO] 10.244.2.2:36004 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000223s
	[INFO] 10.244.2.2:54459 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000128834s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0717 17:34:12.970234    2557 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0717 17:34:12.971580    2557 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0717 17:34:12.972853    2557 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0717 17:34:12.973489    2557 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0717 17:34:12.975487    2557 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006777] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.574354] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +1.320177] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +1.823982] systemd-fstab-generator[476]: Ignoring "noauto" option for root device
	[  +0.112634] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
	[  +1.921936] systemd-fstab-generator[1101]: Ignoring "noauto" option for root device
	[  +0.055582] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.184169] systemd-fstab-generator[1137]: Ignoring "noauto" option for root device
	[  +0.104772] systemd-fstab-generator[1149]: Ignoring "noauto" option for root device
	[  +0.113810] systemd-fstab-generator[1163]: Ignoring "noauto" option for root device
	[  +2.482664] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.099717] systemd-fstab-generator[1391]: Ignoring "noauto" option for root device
	[  +0.099983] systemd-fstab-generator[1403]: Ignoring "noauto" option for root device
	[  +0.118727] systemd-fstab-generator[1418]: Ignoring "noauto" option for root device
	[  +0.437794] systemd-fstab-generator[1575]: Ignoring "noauto" option for root device
	[Jul17 17:33] kauditd_printk_skb: 234 callbacks suppressed
	[ +21.575073] kauditd_printk_skb: 40 callbacks suppressed
	[ +31.253030] clocksource: timekeeping watchdog on CPU1: Marking clocksource 'tsc' as unstable because the skew is too large:
	[  +0.000025] clocksource:                       'hpet' wd_now: 2db2a3c3 wd_last: 2d0e4271 mask: ffffffff
	[  +0.000022] clocksource:                       'tsc' cs_now: 5d6b30d2ea8 cs_last: 5d5e653cfb0 mask: ffffffffffffffff
	[  +0.001528] TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'.
	[  +0.002348] clocksource: Checking clocksource tsc synchronization from CPU 0.
	
	
	==> etcd [a3398a8ca33a] <==
	{"level":"info","ts":"2024-07-17T17:34:06.481084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	{"level":"warn","ts":"2024-07-17T17:34:07.932636Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f6a6d94fe6ea5bc8","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:34:07.932741Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1d3f36ee75516151","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-17T17:34:07.932687Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1d3f36ee75516151","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-17T17:34:07.933493Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f6a6d94fe6ea5bc8","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-17T17:34:08.181523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:08.181875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:08.182868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:08.183418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:08.183717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:09.88894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:09.889285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:09.88953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:09.890409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:09.890542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:11.580196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:11.580274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:11.580332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:11.580874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:11.58095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	{"level":"warn","ts":"2024-07-17T17:34:12.931535Z","caller":"etcdserver/server.go:2089","msg":"failed to publish local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-572000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2024-07-17T17:34:12.933019Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1d3f36ee75516151","rtt":"0s","error":"dial tcp 192.169.0.7:2380: i/o timeout"}
	{"level":"warn","ts":"2024-07-17T17:34:12.933094Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f6a6d94fe6ea5bc8","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:34:12.934267Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f6a6d94fe6ea5bc8","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:34:12.934292Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1d3f36ee75516151","rtt":"0s","error":"dial tcp 192.169.0.7:2380: i/o timeout"}
	
	
	==> etcd [c6527d620dad] <==
	{"level":"warn","ts":"2024-07-17T17:32:29.48769Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T17:32:24.462555Z","time spent":"5.025134128s","remote":"127.0.0.1:36734","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/07/17 17:32:29 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-17T17:32:29.48774Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T17:32:25.149839Z","time spent":"4.337900582s","remote":"127.0.0.1:45174","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/07/17 17:32:29 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-17T17:32:29.512674Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T17:32:29.512703Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T17:32:29.512731Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"warn","ts":"2024-07-17T17:32:29.512821Z","caller":"etcdserver/server.go:1165","msg":"failed to revoke lease","lease-id":"584490c1bc074071","error":"etcdserver: request cancelled"}
	{"level":"info","ts":"2024-07-17T17:32:29.512836Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f6a6d94fe6ea5bc8"}
	{"level":"info","ts":"2024-07-17T17:32:29.512844Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f6a6d94fe6ea5bc8"}
	{"level":"info","ts":"2024-07-17T17:32:29.512857Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f6a6d94fe6ea5bc8"}
	{"level":"info","ts":"2024-07-17T17:32:29.512905Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"f6a6d94fe6ea5bc8"}
	{"level":"info","ts":"2024-07-17T17:32:29.512927Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"f6a6d94fe6ea5bc8"}
	{"level":"info","ts":"2024-07-17T17:32:29.512948Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"f6a6d94fe6ea5bc8"}
	{"level":"info","ts":"2024-07-17T17:32:29.512956Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f6a6d94fe6ea5bc8"}
	{"level":"info","ts":"2024-07-17T17:32:29.51296Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:32:29.512966Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:32:29.512977Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:32:29.513753Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:32:29.513778Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:32:29.516864Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:32:29.516891Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:32:29.518343Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-07-17T17:32:29.51839Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-07-17T17:32:29.518397Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-572000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 17:34:13 up 1 min,  0 users,  load average: 0.10, 0.05, 0.01
	Linux ha-572000 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6e40e1427ab2] <==
	I0717 17:31:56.892269       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:32:06.898213       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:32:06.898253       1 main.go:303] handling current node
	I0717 17:32:06.898264       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:32:06.898269       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:32:06.898416       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:32:06.898443       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:32:06.898526       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:32:06.898555       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:32:16.896377       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:32:16.896415       1 main.go:303] handling current node
	I0717 17:32:16.896426       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:32:16.896432       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:32:16.896606       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:32:16.896636       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:32:16.896674       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:32:16.896699       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:32:26.896557       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:32:26.896622       1 main.go:303] handling current node
	I0717 17:32:26.896678       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:32:26.896718       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:32:26.896938       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:32:26.897017       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:32:26.897158       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:32:26.897880       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8f09c09ed996] <==
	I0717 17:33:38.766324       1 options.go:221] external host was not specified, using 192.169.0.5
	I0717 17:33:38.766955       1 server.go:148] Version: v1.30.2
	I0717 17:33:38.767101       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:33:39.044188       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0717 17:33:39.046954       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 17:33:39.049409       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0717 17:33:39.049435       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0717 17:33:39.049563       1 instance.go:299] Using reconciler: lease
	W0717 17:33:59.045294       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0717 17:33:59.045986       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0717 17:33:59.051243       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0717 17:33:59.051294       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [1e8f9939826f] <==
	I0717 17:33:35.199426       1 serving.go:380] Generated self-signed cert in-memory
	I0717 17:33:35.611724       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0717 17:33:35.611860       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:33:35.612992       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 17:33:35.613172       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0717 17:33:35.613294       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0717 17:33:35.613433       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0717 17:34:00.060238       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:34220->192.169.0.5:8443: read: connection reset by peer"
	
	
	==> kube-proxy [2aeed1983535] <==
	I0717 17:27:43.315695       1 server_linux.go:69] "Using iptables proxy"
	I0717 17:27:43.322673       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	I0717 17:27:43.354011       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 17:27:43.354032       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 17:27:43.354044       1 server_linux.go:165] "Using iptables Proxier"
	I0717 17:27:43.355997       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 17:27:43.356216       1 server.go:872] "Version info" version="v1.30.2"
	I0717 17:27:43.356225       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:27:43.356874       1 config.go:192] "Starting service config controller"
	I0717 17:27:43.356903       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 17:27:43.356921       1 config.go:319] "Starting node config controller"
	I0717 17:27:43.356943       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 17:27:43.357077       1 config.go:101] "Starting endpoint slice config controller"
	I0717 17:27:43.357144       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 17:27:43.457513       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 17:27:43.457607       1 shared_informer.go:320] Caches are synced for node config
	I0717 17:27:43.457639       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [a53f8fcdf5d9] <==
	Trace[1679676222]: ---"Objects listed" error:Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (17:33:55.829)
	Trace[1679676222]: [10.002148793s] [10.002148793s] END
	E0717 17:33:55.829461       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	W0717 17:34:00.059485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59664->192.169.0.5:8443: read: connection reset by peer
	E0717 17:34:00.060786       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59664->192.169.0.5:8443: read: connection reset by peer
	W0717 17:34:07.801237       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:07.801736       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:08.253794       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:08.253923       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:08.255527       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:08.255685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:09.119507       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:09.120276       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:09.844542       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:09.845089       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:11.002782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:11.003425       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:11.004959       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:11.005426       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:11.724425       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:11.724599       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:13.328905       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:13.329009       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:13.526537       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:13.526638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	
	
	==> kube-scheduler [e29f4fe295c1] <==
	W0717 17:27:25.906822       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 17:27:25.906857       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 17:27:25.906870       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 17:27:25.906912       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 17:27:26.715070       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 17:27:26.715127       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 17:27:26.797242       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 17:27:26.797298       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 17:27:26.957071       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 17:27:26.957111       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 17:27:27.013148       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 17:27:27.013190       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 17:27:29.895450       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 17:30:13.328557       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-jhz2d\": pod busybox-fc5497c4f-jhz2d is already assigned to node \"ha-572000-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-jhz2d" node="ha-572000-m03"
	E0717 17:30:13.329015       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2f9e6064-727c-486c-b925-3ce5866e42ff(default/busybox-fc5497c4f-jhz2d) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-jhz2d"
	E0717 17:30:13.329121       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-jhz2d\": pod busybox-fc5497c4f-jhz2d is already assigned to node \"ha-572000-m03\"" pod="default/busybox-fc5497c4f-jhz2d"
	I0717 17:30:13.329256       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-jhz2d" node="ha-572000-m03"
	E0717 17:30:13.362412       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-zwhws\": pod busybox-fc5497c4f-zwhws is already assigned to node \"ha-572000\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-zwhws" node="ha-572000"
	E0717 17:30:13.362474       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-zwhws\": pod busybox-fc5497c4f-zwhws is already assigned to node \"ha-572000\"" pod="default/busybox-fc5497c4f-zwhws"
	E0717 17:30:13.441720       1 schedule_one.go:1067] "Error occurred" err="Pod default/busybox-fc5497c4f-l7sqr is already present in the active queue" pod="default/busybox-fc5497c4f-l7sqr"
	E0717 17:30:39.870609       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5wcph\": pod kube-proxy-5wcph is already assigned to node \"ha-572000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5wcph" node="ha-572000-m04"
	E0717 17:30:39.870661       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 731f5b57-131e-4e97-b47a-036b8d4edbcd(kube-system/kube-proxy-5wcph) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5wcph"
	E0717 17:30:39.870672       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5wcph\": pod kube-proxy-5wcph is already assigned to node \"ha-572000-m04\"" pod="kube-system/kube-proxy-5wcph"
	I0717 17:30:39.870686       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5wcph" node="ha-572000-m04"
	E0717 17:32:29.355082       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 17 17:33:55 ha-572000 kubelet[1582]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:33:55 ha-572000 kubelet[1582]: E0717 17:33:55.659345    1582 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-572000\" not found"
	Jul 17 17:33:57 ha-572000 kubelet[1582]: E0717 17:33:57.056097    1582 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-572000.17e310749989a167  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-572000,UID:ha-572000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-572000,},FirstTimestamp:2024-07-17 17:32:55.563845991 +0000 UTC m=+0.198827444,LastTimestamp:2024-07-17 17:32:55.563845991 +0000 UTC m=+0.198827444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-572000,}"
	Jul 17 17:34:00 ha-572000 kubelet[1582]: I0717 17:34:00.128178    1582 scope.go:117] "RemoveContainer" containerID="3f41d1a5d2c920e02b958fc57cddcf6e76eb7e4b56f3544ba5066b682eea0c8d"
	Jul 17 17:34:00 ha-572000 kubelet[1582]: I0717 17:34:00.129823    1582 scope.go:117] "RemoveContainer" containerID="1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca"
	Jul 17 17:34:00 ha-572000 kubelet[1582]: E0717 17:34:00.130076    1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-572000_kube-system(967ef612fac8855d0ef3892ca2ae35cc)\"" pod="kube-system/kube-controller-manager-ha-572000" podUID="967ef612fac8855d0ef3892ca2ae35cc"
	Jul 17 17:34:00 ha-572000 kubelet[1582]: I0717 17:34:00.145332    1582 scope.go:117] "RemoveContainer" containerID="8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f"
	Jul 17 17:34:00 ha-572000 kubelet[1582]: E0717 17:34:00.145681    1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-572000_kube-system(2508736da9632d6a9c9aaf9c250b4f65)\"" pod="kube-system/kube-apiserver-ha-572000" podUID="2508736da9632d6a9c9aaf9c250b4f65"
	Jul 17 17:34:00 ha-572000 kubelet[1582]: I0717 17:34:00.150461    1582 scope.go:117] "RemoveContainer" containerID="a6e0742139d63216679ae18bc09ccda23c0ed0e4d7a419b36e966729969bea9b"
	Jul 17 17:34:00 ha-572000 kubelet[1582]: I0717 17:34:00.925798    1582 kubelet_node_status.go:73] "Attempting to register node" node="ha-572000"
	Jul 17 17:34:03 ha-572000 kubelet[1582]: E0717 17:34:03.198768    1582 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-572000"
	Jul 17 17:34:03 ha-572000 kubelet[1582]: E0717 17:34:03.199285    1582 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-572000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Jul 17 17:34:03 ha-572000 kubelet[1582]: I0717 17:34:03.360896    1582 scope.go:117] "RemoveContainer" containerID="1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca"
	Jul 17 17:34:03 ha-572000 kubelet[1582]: E0717 17:34:03.361398    1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-572000_kube-system(967ef612fac8855d0ef3892ca2ae35cc)\"" pod="kube-system/kube-controller-manager-ha-572000" podUID="967ef612fac8855d0ef3892ca2ae35cc"
	Jul 17 17:34:04 ha-572000 kubelet[1582]: I0717 17:34:04.792263    1582 scope.go:117] "RemoveContainer" containerID="1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca"
	Jul 17 17:34:04 ha-572000 kubelet[1582]: E0717 17:34:04.792672    1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-572000_kube-system(967ef612fac8855d0ef3892ca2ae35cc)\"" pod="kube-system/kube-controller-manager-ha-572000" podUID="967ef612fac8855d0ef3892ca2ae35cc"
	Jul 17 17:34:05 ha-572000 kubelet[1582]: I0717 17:34:05.369319    1582 scope.go:117] "RemoveContainer" containerID="8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f"
	Jul 17 17:34:05 ha-572000 kubelet[1582]: E0717 17:34:05.369956    1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-572000_kube-system(2508736da9632d6a9c9aaf9c250b4f65)\"" pod="kube-system/kube-apiserver-ha-572000" podUID="2508736da9632d6a9c9aaf9c250b4f65"
	Jul 17 17:34:05 ha-572000 kubelet[1582]: E0717 17:34:05.660481    1582 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-572000\" not found"
	Jul 17 17:34:08 ha-572000 kubelet[1582]: I0717 17:34:08.082261    1582 scope.go:117] "RemoveContainer" containerID="8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f"
	Jul 17 17:34:08 ha-572000 kubelet[1582]: E0717 17:34:08.082649    1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-572000_kube-system(2508736da9632d6a9c9aaf9c250b4f65)\"" pod="kube-system/kube-apiserver-ha-572000" podUID="2508736da9632d6a9c9aaf9c250b4f65"
	Jul 17 17:34:09 ha-572000 kubelet[1582]: E0717 17:34:09.342982    1582 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-572000.17e310749989a167  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-572000,UID:ha-572000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-572000,},FirstTimestamp:2024-07-17 17:32:55.563845991 +0000 UTC m=+0.198827444,LastTimestamp:2024-07-17 17:32:55.563845991 +0000 UTC m=+0.198827444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-572000,}"
	Jul 17 17:34:10 ha-572000 kubelet[1582]: I0717 17:34:10.207057    1582 kubelet_node_status.go:73] "Attempting to register node" node="ha-572000"
	Jul 17 17:34:12 ha-572000 kubelet[1582]: E0717 17:34:12.418909    1582 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-572000"
	Jul 17 17:34:12 ha-572000 kubelet[1582]: E0717 17:34:12.419039    1582 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-572000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-572000 -n ha-572000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-572000 -n ha-572000: exit status 2 (156.383656ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-572000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (124.19s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (3.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-572000 node delete m03 -v=7 --alsologtostderr: exit status 83 (177.166059ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-572000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-572000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 10:34:14.332231    3552 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:34:14.332592    3552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:34:14.332598    3552 out.go:304] Setting ErrFile to fd 2...
	I0717 10:34:14.332602    3552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:34:14.332788    3552 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
	I0717 10:34:14.333115    3552 mustload.go:65] Loading cluster: ha-572000
	I0717 10:34:14.333392    3552 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:34:14.333759    3552 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:34:14.333805    3552 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:34:14.342041    3552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51812
	I0717 10:34:14.342423    3552 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:34:14.342848    3552 main.go:141] libmachine: Using API Version  1
	I0717 10:34:14.342859    3552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:34:14.343094    3552 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:34:14.343220    3552 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:34:14.343315    3552 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:34:14.343415    3552 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3521
	I0717 10:34:14.344372    3552 host.go:66] Checking if "ha-572000" exists ...
	I0717 10:34:14.344639    3552 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:34:14.344666    3552 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:34:14.352945    3552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51814
	I0717 10:34:14.353300    3552 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:34:14.353756    3552 main.go:141] libmachine: Using API Version  1
	I0717 10:34:14.353778    3552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:34:14.354002    3552 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:34:14.354131    3552 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:34:14.354478    3552 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:34:14.354500    3552 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:34:14.362609    3552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51816
	I0717 10:34:14.362973    3552 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:34:14.363317    3552 main.go:141] libmachine: Using API Version  1
	I0717 10:34:14.363330    3552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:34:14.363568    3552 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:34:14.363685    3552 main.go:141] libmachine: (ha-572000-m02) Calling .GetState
	I0717 10:34:14.363781    3552 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:34:14.363864    3552 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 3526
	I0717 10:34:14.364827    3552 host.go:66] Checking if "ha-572000-m02" exists ...
	I0717 10:34:14.365085    3552 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:34:14.365109    3552 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:34:14.373315    3552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51818
	I0717 10:34:14.373644    3552 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:34:14.374013    3552 main.go:141] libmachine: Using API Version  1
	I0717 10:34:14.374033    3552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:34:14.374227    3552 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:34:14.374348    3552 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:34:14.374700    3552 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:34:14.374724    3552 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:34:14.383257    3552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51820
	I0717 10:34:14.383631    3552 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:34:14.383981    3552 main.go:141] libmachine: Using API Version  1
	I0717 10:34:14.384006    3552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:34:14.384234    3552 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:34:14.384352    3552 main.go:141] libmachine: (ha-572000-m03) Calling .GetState
	I0717 10:34:14.384444    3552 main.go:141] libmachine: (ha-572000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:34:14.384549    3552 main.go:141] libmachine: (ha-572000-m03) DBG | hyperkit pid from json: 2972
	I0717 10:34:14.385522    3552 main.go:141] libmachine: (ha-572000-m03) DBG | hyperkit pid 2972 missing from process table
	I0717 10:34:14.408198    3552 out.go:177] * The control-plane node ha-572000-m03 host is not running: state=Stopped
	I0717 10:34:14.430003    3552 out.go:177]   To start a cluster, run: "minikube start -p ha-572000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-amd64 -p ha-572000 node delete m03 -v=7 --alsologtostderr": exit status 83
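Editor's note: the delete is rejected with exit status 83 because the ha-572000-m03 host is stopped. A hedged sketch of a manual retry (assuming the m03 VM can still boot; "node start" is the same subcommand the audit log below shows being used for m02):

	# bring the stopped control-plane node back up, then retry the delete
	out/minikube-darwin-amd64 -p ha-572000 node start m03 -v=7 --alsologtostderr
	out/minikube-darwin-amd64 -p ha-572000 node delete m03 -v=7 --alsologtostderr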
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr: exit status 7 (263.364697ms)

                                                
                                                
-- stdout --
	ha-572000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-572000-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-572000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-572000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 10:34:14.507980    3559 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:34:14.508260    3559 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:34:14.508266    3559 out.go:304] Setting ErrFile to fd 2...
	I0717 10:34:14.508269    3559 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:34:14.508466    3559 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
	I0717 10:34:14.508642    3559 out.go:298] Setting JSON to false
	I0717 10:34:14.508666    3559 mustload.go:65] Loading cluster: ha-572000
	I0717 10:34:14.508700    3559 notify.go:220] Checking for updates...
	I0717 10:34:14.508980    3559 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:34:14.508996    3559 status.go:255] checking status of ha-572000 ...
	I0717 10:34:14.509348    3559 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:34:14.509401    3559 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:34:14.518246    3559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51823
	I0717 10:34:14.518576    3559 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:34:14.518976    3559 main.go:141] libmachine: Using API Version  1
	I0717 10:34:14.518984    3559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:34:14.519212    3559 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:34:14.519325    3559 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:34:14.519415    3559 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:34:14.519480    3559 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3521
	I0717 10:34:14.520432    3559 status.go:330] ha-572000 host status = "Running" (err=<nil>)
	I0717 10:34:14.520453    3559 host.go:66] Checking if "ha-572000" exists ...
	I0717 10:34:14.520680    3559 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:34:14.520700    3559 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:34:14.528962    3559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51825
	I0717 10:34:14.529324    3559 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:34:14.529691    3559 main.go:141] libmachine: Using API Version  1
	I0717 10:34:14.529708    3559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:34:14.529951    3559 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:34:14.530091    3559 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:34:14.536454    3559 host.go:66] Checking if "ha-572000" exists ...
	I0717 10:34:14.536703    3559 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:34:14.536727    3559 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:34:14.545077    3559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51827
	I0717 10:34:14.545367    3559 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:34:14.545686    3559 main.go:141] libmachine: Using API Version  1
	I0717 10:34:14.545703    3559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:34:14.545911    3559 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:34:14.546024    3559 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:34:14.546171    3559 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 10:34:14.546191    3559 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:34:14.546267    3559 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:34:14.546354    3559 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:34:14.546443    3559 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:34:14.546532    3559 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:34:14.578212    3559 ssh_runner.go:195] Run: systemctl --version
	I0717 10:34:14.582770    3559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 10:34:14.597521    3559 kubeconfig.go:125] found "ha-572000" server: "https://192.169.0.254:8443"
	I0717 10:34:14.597543    3559 api_server.go:166] Checking apiserver status ...
	I0717 10:34:14.597584    3559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 10:34:14.611558    3559 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 10:34:14.611568    3559 status.go:422] ha-572000 apiserver status = Stopped (err=<nil>)
	I0717 10:34:14.611578    3559 status.go:257] ha-572000 status: &{Name:ha-572000 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 10:34:14.611589    3559 status.go:255] checking status of ha-572000-m02 ...
	I0717 10:34:14.611849    3559 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:34:14.611871    3559 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:34:14.620580    3559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51830
	I0717 10:34:14.620959    3559 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:34:14.621317    3559 main.go:141] libmachine: Using API Version  1
	I0717 10:34:14.621333    3559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:34:14.621548    3559 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:34:14.621675    3559 main.go:141] libmachine: (ha-572000-m02) Calling .GetState
	I0717 10:34:14.621773    3559 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:34:14.621837    3559 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 3526
	I0717 10:34:14.622813    3559 status.go:330] ha-572000-m02 host status = "Running" (err=<nil>)
	I0717 10:34:14.622822    3559 host.go:66] Checking if "ha-572000-m02" exists ...
	I0717 10:34:14.623072    3559 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:34:14.623094    3559 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:34:14.631538    3559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51832
	I0717 10:34:14.631891    3559 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:34:14.632246    3559 main.go:141] libmachine: Using API Version  1
	I0717 10:34:14.632260    3559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:34:14.632479    3559 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:34:14.632594    3559 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:34:14.632677    3559 host.go:66] Checking if "ha-572000-m02" exists ...
	I0717 10:34:14.632933    3559 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:34:14.632956    3559 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:34:14.641174    3559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51834
	I0717 10:34:14.641500    3559 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:34:14.641821    3559 main.go:141] libmachine: Using API Version  1
	I0717 10:34:14.641837    3559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:34:14.642052    3559 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:34:14.642170    3559 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:34:14.642315    3559 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 10:34:14.642326    3559 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:34:14.642413    3559 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:34:14.642491    3559 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:34:14.642584    3559 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:34:14.642659    3559 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:34:14.673666    3559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 10:34:14.684191    3559 kubeconfig.go:125] found "ha-572000" server: "https://192.169.0.254:8443"
	I0717 10:34:14.684205    3559 api_server.go:166] Checking apiserver status ...
	I0717 10:34:14.684243    3559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 10:34:14.694011    3559 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 10:34:14.694022    3559 status.go:422] ha-572000-m02 apiserver status = Stopped (err=<nil>)
	I0717 10:34:14.694030    3559 status.go:257] ha-572000-m02 status: &{Name:ha-572000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 10:34:14.694040    3559 status.go:255] checking status of ha-572000-m03 ...
	I0717 10:34:14.694306    3559 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:34:14.694332    3559 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:34:14.702872    3559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51837
	I0717 10:34:14.703212    3559 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:34:14.703596    3559 main.go:141] libmachine: Using API Version  1
	I0717 10:34:14.703613    3559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:34:14.703828    3559 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:34:14.703950    3559 main.go:141] libmachine: (ha-572000-m03) Calling .GetState
	I0717 10:34:14.704045    3559 main.go:141] libmachine: (ha-572000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:34:14.704115    3559 main.go:141] libmachine: (ha-572000-m03) DBG | hyperkit pid from json: 2972
	I0717 10:34:14.705086    3559 main.go:141] libmachine: (ha-572000-m03) DBG | hyperkit pid 2972 missing from process table
	I0717 10:34:14.705106    3559 status.go:330] ha-572000-m03 host status = "Stopped" (err=<nil>)
	I0717 10:34:14.705114    3559 status.go:343] host is not running, skipping remaining checks
	I0717 10:34:14.705121    3559 status.go:257] ha-572000-m03 status: &{Name:ha-572000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 10:34:14.705133    3559 status.go:255] checking status of ha-572000-m04 ...
	I0717 10:34:14.705393    3559 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:34:14.705429    3559 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:34:14.713790    3559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51839
	I0717 10:34:14.714121    3559 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:34:14.714460    3559 main.go:141] libmachine: Using API Version  1
	I0717 10:34:14.714478    3559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:34:14.714683    3559 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:34:14.714791    3559 main.go:141] libmachine: (ha-572000-m04) Calling .GetState
	I0717 10:34:14.714869    3559 main.go:141] libmachine: (ha-572000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:34:14.714959    3559 main.go:141] libmachine: (ha-572000-m04) DBG | hyperkit pid from json: 3096
	I0717 10:34:14.715910    3559 main.go:141] libmachine: (ha-572000-m04) DBG | hyperkit pid 3096 missing from process table
	I0717 10:34:14.715932    3559 status.go:330] ha-572000-m04 host status = "Stopped" (err=<nil>)
	I0717 10:34:14.715939    3559 status.go:343] host is not running, skipping remaining checks
	I0717 10:34:14.715946    3559 status.go:257] ha-572000-m04 status: &{Name:ha-572000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr" : exit status 7
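Editor's note: per the description in `minikube status --help`, the exit status encodes component health as bits (1 = host/VM not OK, 2 = cluster not OK, 4 = Kubernetes not OK), so exit status 7 here is consistent with the stopped hosts and apiservers listed above; that bit scheme is taken from the help text, not from this report. A minimal shell sketch of acting on it:

	out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr
	rc=$?
	(( rc & 4 )) && echo "kubernetes components not OK (exit code $rc)"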
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-572000 -n ha-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-572000 -n ha-572000: exit status 2 (152.887447ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-572000 logs -n 25: (2.212154004s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m02 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m03_ha-572000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m03:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04:/home/docker/cp-test_ha-572000-m03_ha-572000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m04 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m03_ha-572000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-572000 cp testdata/cp-test.txt                                                                                            | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3497274976/001/cp-test_ha-572000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000:/home/docker/cp-test_ha-572000-m04_ha-572000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000 sudo cat                                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m04_ha-572000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m02:/home/docker/cp-test_ha-572000-m04_ha-572000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m02 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m04_ha-572000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m03:/home/docker/cp-test_ha-572000-m04_ha-572000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m03 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m04_ha-572000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-572000 node stop m02 -v=7                                                                                                 | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-572000 node start m02 -v=7                                                                                                | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:32 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-572000 -v=7                                                                                                       | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-572000 -v=7                                                                                                            | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT | 17 Jul 24 10:32 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-572000 --wait=true -v=7                                                                                                | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-572000                                                                                                            | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:34 PDT |                     |
	| node    | ha-572000 node delete m03 -v=7                                                                                               | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:34 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 10:32:37
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 10:32:37.218202    3508 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:32:37.218482    3508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:32:37.218488    3508 out.go:304] Setting ErrFile to fd 2...
	I0717 10:32:37.218492    3508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:32:37.218678    3508 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
	I0717 10:32:37.220111    3508 out.go:298] Setting JSON to false
	I0717 10:32:37.243881    3508 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1928,"bootTime":1721235629,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0717 10:32:37.243971    3508 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:32:37.265852    3508 out.go:177] * [ha-572000] minikube v1.33.1 on Darwin 14.5
	I0717 10:32:37.307717    3508 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 10:32:37.307783    3508 notify.go:220] Checking for updates...
	I0717 10:32:37.352082    3508 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:32:37.394723    3508 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 10:32:37.416561    3508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:32:37.437566    3508 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
	I0717 10:32:37.458758    3508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:32:37.480259    3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:32:37.480391    3508 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:32:37.481074    3508 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:32:37.481147    3508 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:32:37.491120    3508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51745
	I0717 10:32:37.491492    3508 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:32:37.491919    3508 main.go:141] libmachine: Using API Version  1
	I0717 10:32:37.491928    3508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:32:37.492189    3508 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:32:37.492307    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:37.520549    3508 out.go:177] * Using the hyperkit driver based on existing profile
	I0717 10:32:37.563535    3508 start.go:297] selected driver: hyperkit
	I0717 10:32:37.563555    3508 start.go:901] validating driver "hyperkit" against &{Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:fals
e efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:32:37.563770    3508 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:32:37.563903    3508 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:32:37.564063    3508 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19283-1099/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0717 10:32:37.572774    3508 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0717 10:32:37.578697    3508 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:32:37.578722    3508 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0717 10:32:37.582004    3508 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:32:37.582058    3508 cni.go:84] Creating CNI manager for ""
	I0717 10:32:37.582066    3508 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 10:32:37.582150    3508 start.go:340] cluster config:
	{Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-t
iller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:32:37.582277    3508 iso.go:125] acquiring lock: {Name:mkf51f842bcc8a77e9c7c50d642c4c76848e96af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:32:37.624644    3508 out.go:177] * Starting "ha-572000" primary control-plane node in "ha-572000" cluster
	I0717 10:32:37.645662    3508 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:32:37.645750    3508 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0717 10:32:37.645778    3508 cache.go:56] Caching tarball of preloaded images
	I0717 10:32:37.645983    3508 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:32:37.646002    3508 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:32:37.646175    3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:32:37.647084    3508 start.go:360] acquireMachinesLock for ha-572000: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:32:37.647209    3508 start.go:364] duration metric: took 99.885µs to acquireMachinesLock for "ha-572000"
	I0717 10:32:37.647240    3508 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:32:37.647261    3508 fix.go:54] fixHost starting: 
	I0717 10:32:37.647673    3508 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:32:37.647700    3508 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:32:37.656651    3508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51747
	I0717 10:32:37.657021    3508 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:32:37.657336    3508 main.go:141] libmachine: Using API Version  1
	I0717 10:32:37.657346    3508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:32:37.657590    3508 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:32:37.657719    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:37.657832    3508 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:32:37.657936    3508 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:37.658021    3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 2926
	I0717 10:32:37.658989    3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid 2926 missing from process table
	I0717 10:32:37.658986    3508 fix.go:112] recreateIfNeeded on ha-572000: state=Stopped err=<nil>
	I0717 10:32:37.659004    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	W0717 10:32:37.659109    3508 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:32:37.701727    3508 out.go:177] * Restarting existing hyperkit VM for "ha-572000" ...
	I0717 10:32:37.722485    3508 main.go:141] libmachine: (ha-572000) Calling .Start
	I0717 10:32:37.722730    3508 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:37.722799    3508 main.go:141] libmachine: (ha-572000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid
	I0717 10:32:37.724830    3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid 2926 missing from process table
	I0717 10:32:37.724872    3508 main.go:141] libmachine: (ha-572000) DBG | pid 2926 is in state "Stopped"
	I0717 10:32:37.724889    3508 main.go:141] libmachine: (ha-572000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid...
	I0717 10:32:37.725226    3508 main.go:141] libmachine: (ha-572000) DBG | Using UUID 5f2666de-0b32-4258-9840-7856c1bd4173
	I0717 10:32:37.837447    3508 main.go:141] libmachine: (ha-572000) DBG | Generated MAC d2:a6:10:ad:80:98
	I0717 10:32:37.837476    3508 main.go:141] libmachine: (ha-572000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:32:37.837593    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5f2666de-0b32-4258-9840-7856c1bd4173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bc9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:32:37.837631    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5f2666de-0b32-4258-9840-7856c1bd4173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bc9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:32:37.837679    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5f2666de-0b32-4258-9840-7856c1bd4173", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/ha-572000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:32:37.837720    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5f2666de-0b32-4258-9840-7856c1bd4173 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/ha-572000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:32:37.837736    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:32:37.839166    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Pid is 3521
	I0717 10:32:37.839653    3508 main.go:141] libmachine: (ha-572000) DBG | Attempt 0
	I0717 10:32:37.839674    3508 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:37.839714    3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3521
	I0717 10:32:37.841412    3508 main.go:141] libmachine: (ha-572000) DBG | Searching for d2:a6:10:ad:80:98 in /var/db/dhcpd_leases ...
	I0717 10:32:37.841498    3508 main.go:141] libmachine: (ha-572000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:32:37.841515    3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:37:45:6a:f1:7f ID:1,1e:37:45:6a:f1:7f Lease:0x6698001b}
	I0717 10:32:37.841527    3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x6699517a}
	I0717 10:32:37.841536    3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:d3:62:da:43:cf ID:1,6e:d3:62:da:43:cf Lease:0x669950e4}
	I0717 10:32:37.841559    3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x66994ff6}
	I0717 10:32:37.841570    3508 main.go:141] libmachine: (ha-572000) DBG | Found match: d2:a6:10:ad:80:98
	I0717 10:32:37.841595    3508 main.go:141] libmachine: (ha-572000) DBG | IP: 192.169.0.5
	I0717 10:32:37.841705    3508 main.go:141] libmachine: (ha-572000) Calling .GetConfigRaw
	I0717 10:32:37.842357    3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:32:37.842580    3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:32:37.843052    3508 machine.go:94] provisionDockerMachine start ...
	I0717 10:32:37.843065    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:37.843201    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:37.843303    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:37.843420    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:37.843572    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:37.843663    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:37.843791    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:37.844002    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:37.844014    3508 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:32:37.847060    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:32:37.898878    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:32:37.899633    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:32:37.899658    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:32:37.899668    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:32:37.899678    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:32:38.277909    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:32:38.277922    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:32:38.392613    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:32:38.392633    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:32:38.392644    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:32:38.392676    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:32:38.393519    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:32:38.393530    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:32:43.648108    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:32:43.648154    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:32:43.648161    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:32:43.672680    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:32:48.904402    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:32:48.904418    3508 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:32:48.904582    3508 buildroot.go:166] provisioning hostname "ha-572000"
	I0717 10:32:48.904593    3508 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:32:48.904692    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:48.904776    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:48.904887    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:48.904976    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:48.905073    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:48.905225    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:48.905383    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:48.905392    3508 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000 && echo "ha-572000" | sudo tee /etc/hostname
	I0717 10:32:48.967564    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000
	
	I0717 10:32:48.967584    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:48.967740    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:48.967836    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:48.967934    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:48.968014    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:48.968132    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:48.968282    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:48.968293    3508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:32:49.026313    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:32:49.026336    3508 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:32:49.026353    3508 buildroot.go:174] setting up certificates
	I0717 10:32:49.026367    3508 provision.go:84] configureAuth start
	I0717 10:32:49.026375    3508 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:32:49.026507    3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:32:49.026613    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:49.026706    3508 provision.go:143] copyHostCerts
	I0717 10:32:49.026741    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:32:49.026811    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:32:49.026819    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:32:49.026972    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:32:49.027200    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:32:49.027231    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:32:49.027236    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:32:49.027325    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:32:49.027487    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:32:49.027519    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:32:49.027524    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:32:49.027590    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:32:49.027748    3508 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000 san=[127.0.0.1 192.169.0.5 ha-572000 localhost minikube]
	I0717 10:32:49.085766    3508 provision.go:177] copyRemoteCerts
	I0717 10:32:49.085812    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:32:49.085827    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:49.086112    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:49.086217    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.086305    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:49.086395    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:32:49.120573    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:32:49.120648    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:32:49.139510    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:32:49.139585    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0717 10:32:49.158247    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:32:49.158317    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:32:49.177520    3508 provision.go:87] duration metric: took 151.137832ms to configureAuth
	I0717 10:32:49.177532    3508 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:32:49.177693    3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:32:49.177706    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:49.177837    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:49.177945    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:49.178031    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.178106    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.178195    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:49.178315    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:49.178439    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:49.178454    3508 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:32:49.231928    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:32:49.231939    3508 buildroot.go:70] root file system type: tmpfs
	I0717 10:32:49.232011    3508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:32:49.232025    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:49.232158    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:49.232247    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.232341    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.232427    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:49.232563    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:49.232710    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:49.232755    3508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:32:49.295280    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:32:49.295308    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:49.295446    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:49.295550    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.295637    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.295723    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:49.295852    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:49.295991    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:49.296003    3508 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:32:50.972633    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:32:50.972648    3508 machine.go:97] duration metric: took 13.129388483s to provisionDockerMachine
	I0717 10:32:50.972660    3508 start.go:293] postStartSetup for "ha-572000" (driver="hyperkit")
	I0717 10:32:50.972668    3508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:32:50.972678    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:50.972893    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:32:50.972908    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:50.973007    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:50.973108    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:50.973193    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:50.973281    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:32:51.011765    3508 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:32:51.016752    3508 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:32:51.016768    3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:32:51.016865    3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:32:51.017004    3508 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:32:51.017011    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:32:51.017179    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:32:51.027779    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:32:51.057568    3508 start.go:296] duration metric: took 84.89741ms for postStartSetup
	I0717 10:32:51.057590    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:51.057768    3508 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:32:51.057780    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:51.057871    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:51.057953    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:51.058038    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:51.058120    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:32:51.090670    3508 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:32:51.090728    3508 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:32:51.124190    3508 fix.go:56] duration metric: took 13.476731728s for fixHost
	I0717 10:32:51.124211    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:51.124344    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:51.124460    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:51.124556    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:51.124646    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:51.124769    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:51.124925    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:51.124933    3508 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 10:32:51.178019    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237571.303332168
	
	I0717 10:32:51.178031    3508 fix.go:216] guest clock: 1721237571.303332168
	I0717 10:32:51.178046    3508 fix.go:229] Guest: 2024-07-17 10:32:51.303332168 -0700 PDT Remote: 2024-07-17 10:32:51.124202 -0700 PDT m=+13.941974821 (delta=179.130168ms)
	I0717 10:32:51.178065    3508 fix.go:200] guest clock delta is within tolerance: 179.130168ms
	I0717 10:32:51.178069    3508 start.go:83] releasing machines lock for "ha-572000", held for 13.530645229s
	I0717 10:32:51.178090    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:51.178220    3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:32:51.178321    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:51.178658    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:51.178764    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:51.178848    3508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:32:51.178881    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:51.178898    3508 ssh_runner.go:195] Run: cat /version.json
	I0717 10:32:51.178911    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:51.178978    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:51.179001    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:51.179061    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:51.179087    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:51.179158    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:51.179178    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:51.179272    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:32:51.179286    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:32:51.214891    3508 ssh_runner.go:195] Run: systemctl --version
	I0717 10:32:51.259994    3508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 10:32:51.264962    3508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:32:51.265002    3508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:32:51.277704    3508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:32:51.277717    3508 start.go:495] detecting cgroup driver to use...
	I0717 10:32:51.277809    3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:32:51.295436    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:32:51.304332    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:32:51.313061    3508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:32:51.313115    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:32:51.321793    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:32:51.330506    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:32:51.339262    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:32:51.347997    3508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:32:51.356934    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:32:51.365798    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:32:51.374520    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:32:51.383330    3508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:32:51.391096    3508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:32:51.398988    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:32:51.492043    3508 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:32:51.510670    3508 start.go:495] detecting cgroup driver to use...
	I0717 10:32:51.510748    3508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:32:51.522109    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:32:51.533578    3508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:32:51.547583    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:32:51.558324    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:32:51.568495    3508 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:32:51.586295    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:32:51.596174    3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:32:51.611388    3508 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:32:51.614154    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:32:51.621515    3508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:32:51.636315    3508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:32:51.730805    3508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:32:51.833325    3508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:32:51.833396    3508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:32:51.849329    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:32:51.950120    3508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:32:54.304256    3508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.354082061s)
	I0717 10:32:54.304312    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:32:54.314507    3508 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0717 10:32:54.327160    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:32:54.337277    3508 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:32:54.428967    3508 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:32:54.528124    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:32:54.629785    3508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:32:54.644492    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:32:54.655322    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:32:54.750191    3508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 10:32:54.814687    3508 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:32:54.814779    3508 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:32:54.819517    3508 start.go:563] Will wait 60s for crictl version
	I0717 10:32:54.819571    3508 ssh_runner.go:195] Run: which crictl
	I0717 10:32:54.823230    3508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:32:54.848640    3508 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 10:32:54.848713    3508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:32:54.866198    3508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:32:54.925410    3508 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:32:54.925479    3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:32:54.925865    3508 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:32:54.930367    3508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:32:54.939983    3508 kubeadm.go:883] updating cluster {Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 10:32:54.940088    3508 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:32:54.940151    3508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 10:32:54.953243    3508 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0717 10:32:54.953256    3508 docker.go:615] Images already preloaded, skipping extraction
	I0717 10:32:54.953343    3508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 10:32:54.966247    3508 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0717 10:32:54.966267    3508 cache_images.go:84] Images are preloaded, skipping loading
	I0717 10:32:54.966280    3508 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.30.2 docker true true} ...
	I0717 10:32:54.966352    3508 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-572000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 10:32:54.966420    3508 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 10:32:54.987201    3508 cni.go:84] Creating CNI manager for ""
	I0717 10:32:54.987214    3508 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 10:32:54.987234    3508 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 10:32:54.987251    3508 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-572000 NodeName:ha-572000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 10:32:54.987337    3508 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-572000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 10:32:54.987354    3508 kube-vip.go:115] generating kube-vip config ...
	I0717 10:32:54.987400    3508 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 10:32:54.999700    3508 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 10:32:54.999787    3508 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0717 10:32:54.999838    3508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:32:55.007455    3508 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:32:55.007500    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 10:32:55.014894    3508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0717 10:32:55.028112    3508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:32:55.043389    3508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0717 10:32:55.057830    3508 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0717 10:32:55.071316    3508 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0717 10:32:55.074184    3508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:32:55.083466    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:32:55.183439    3508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:32:55.197167    3508 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000 for IP: 192.169.0.5
	I0717 10:32:55.197180    3508 certs.go:194] generating shared ca certs ...
	I0717 10:32:55.197190    3508 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.197338    3508 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:32:55.197396    3508 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:32:55.197406    3508 certs.go:256] generating profile certs ...
	I0717 10:32:55.197495    3508 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key
	I0717 10:32:55.197518    3508 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7
	I0717 10:32:55.197535    3508 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0717 10:32:55.361955    3508 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7 ...
	I0717 10:32:55.361972    3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7: {Name:mk29664a7594975eea689d2f8ed48fdc71e62969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.362392    3508 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7 ...
	I0717 10:32:55.362403    3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7: {Name:mk57740b7d279f3d01c1e4241799a0ef5b1e79c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.362628    3508 certs.go:381] copying /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7 -> /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt
	I0717 10:32:55.362825    3508 certs.go:385] copying /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7 -> /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key
	I0717 10:32:55.363038    3508 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key
	I0717 10:32:55.363048    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:32:55.363071    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:32:55.363089    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:32:55.363110    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:32:55.363127    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 10:32:55.363144    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 10:32:55.363163    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 10:32:55.363191    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 10:32:55.363269    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:32:55.363307    3508 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:32:55.363315    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:32:55.363344    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:32:55.363373    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:32:55.363400    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:32:55.363474    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:32:55.363509    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:32:55.363530    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:32:55.363548    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:32:55.363978    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:32:55.392580    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:32:55.424360    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:32:55.448923    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:32:55.478217    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 10:32:55.513430    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 10:32:55.570074    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 10:32:55.603052    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 10:32:55.623021    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:32:55.641658    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:32:55.661447    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:32:55.681020    3508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 10:32:55.694280    3508 ssh_runner.go:195] Run: openssl version
	I0717 10:32:55.698669    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:32:55.707011    3508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:32:55.710297    3508 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:32:55.710338    3508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:32:55.714541    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:32:55.722665    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:32:55.730951    3508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:32:55.734212    3508 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:32:55.734256    3508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:32:55.738428    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
	I0717 10:32:55.746621    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:32:55.754849    3508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:32:55.758298    3508 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:32:55.758341    3508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:32:55.762565    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 10:32:55.770829    3508 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:32:55.774715    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 10:32:55.780174    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 10:32:55.784640    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 10:32:55.789061    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 10:32:55.793372    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 10:32:55.797672    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 10:32:55.802149    3508 kubeadm.go:392] StartCluster: {Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:32:55.802263    3508 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 10:32:55.813831    3508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 10:32:55.821229    3508 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 10:32:55.821245    3508 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 10:32:55.821296    3508 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 10:32:55.828842    3508 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 10:32:55.829172    3508 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-572000" does not appear in /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:32:55.829253    3508 kubeconfig.go:62] /Users/jenkins/minikube-integration/19283-1099/kubeconfig needs updating (will repair): [kubeconfig missing "ha-572000" cluster setting kubeconfig missing "ha-572000" context setting]
	I0717 10:32:55.829432    3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.829834    3508 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:32:55.830028    3508 kapi.go:59] client config for ha-572000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x71e8b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 10:32:55.830325    3508 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 10:32:55.830504    3508 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 10:32:55.837614    3508 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0717 10:32:55.837631    3508 kubeadm.go:597] duration metric: took 16.382346ms to restartPrimaryControlPlane
	I0717 10:32:55.837636    3508 kubeadm.go:394] duration metric: took 35.493194ms to StartCluster
	I0717 10:32:55.837647    3508 settings.go:142] acquiring lock: {Name:mkc45f011a907c66e2dbca7dadfff37ab48f7d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.837726    3508 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:32:55.838160    3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.838398    3508 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:32:55.838411    3508 start.go:241] waiting for startup goroutines ...
	I0717 10:32:55.838425    3508 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 10:32:55.838529    3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:32:55.881476    3508 out.go:177] * Enabled addons: 
	I0717 10:32:55.902556    3508 addons.go:510] duration metric: took 64.135812ms for enable addons: enabled=[]
	I0717 10:32:55.902605    3508 start.go:246] waiting for cluster config update ...
	I0717 10:32:55.902617    3508 start.go:255] writing updated cluster config ...
	I0717 10:32:55.924553    3508 out.go:177] 
	I0717 10:32:55.945720    3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:32:55.945818    3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:32:55.967938    3508 out.go:177] * Starting "ha-572000-m02" control-plane node in "ha-572000" cluster
	I0717 10:32:56.010383    3508 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:32:56.010417    3508 cache.go:56] Caching tarball of preloaded images
	I0717 10:32:56.010593    3508 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:32:56.010613    3508 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:32:56.010735    3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:32:56.011714    3508 start.go:360] acquireMachinesLock for ha-572000-m02: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:32:56.011815    3508 start.go:364] duration metric: took 76.983µs to acquireMachinesLock for "ha-572000-m02"
	I0717 10:32:56.011840    3508 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:32:56.011849    3508 fix.go:54] fixHost starting: m02
	I0717 10:32:56.012268    3508 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:32:56.012290    3508 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:32:56.021749    3508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51769
	I0717 10:32:56.022134    3508 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:32:56.022452    3508 main.go:141] libmachine: Using API Version  1
	I0717 10:32:56.022466    3508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:32:56.022707    3508 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:32:56.022831    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:32:56.022920    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetState
	I0717 10:32:56.023010    3508 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:56.023088    3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 3461
	I0717 10:32:56.024015    3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid 3461 missing from process table
	I0717 10:32:56.024031    3508 fix.go:112] recreateIfNeeded on ha-572000-m02: state=Stopped err=<nil>
	I0717 10:32:56.024040    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	W0717 10:32:56.024134    3508 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:32:56.066377    3508 out.go:177] * Restarting existing hyperkit VM for "ha-572000-m02" ...
	I0717 10:32:56.087674    3508 main.go:141] libmachine: (ha-572000-m02) Calling .Start
	I0717 10:32:56.087950    3508 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:56.087999    3508 main.go:141] libmachine: (ha-572000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid
	I0717 10:32:56.089806    3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid 3461 missing from process table
	I0717 10:32:56.089821    3508 main.go:141] libmachine: (ha-572000-m02) DBG | pid 3461 is in state "Stopped"
	I0717 10:32:56.089839    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid...
	I0717 10:32:56.090122    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Using UUID b5da5881-83da-4916-aec8-9a96c30c8c05
	I0717 10:32:56.117133    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Generated MAC 2:60:33:0:68:8b
	I0717 10:32:56.117180    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:32:56.117265    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b5da5881-83da-4916-aec8-9a96c30c8c05", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bea20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:32:56.117293    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b5da5881-83da-4916-aec8-9a96c30c8c05", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bea20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:32:56.117357    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b5da5881-83da-4916-aec8-9a96c30c8c05", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/ha-572000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:32:56.117402    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b5da5881-83da-4916-aec8-9a96c30c8c05 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/ha-572000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:32:56.117418    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:32:56.118762    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Pid is 3526
	I0717 10:32:56.119239    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Attempt 0
	I0717 10:32:56.119252    3508 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:56.119326    3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 3526
	I0717 10:32:56.121158    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Searching for 2:60:33:0:68:8b in /var/db/dhcpd_leases ...
	I0717 10:32:56.121244    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:32:56.121275    3508 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x669951be}
	I0717 10:32:56.121292    3508 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:37:45:6a:f1:7f ID:1,1e:37:45:6a:f1:7f Lease:0x6698001b}
	I0717 10:32:56.121303    3508 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x6699517a}
	I0717 10:32:56.121311    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Found match: 2:60:33:0:68:8b
	I0717 10:32:56.121322    3508 main.go:141] libmachine: (ha-572000-m02) DBG | IP: 192.169.0.6
	I0717 10:32:56.121381    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetConfigRaw
	I0717 10:32:56.122119    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:32:56.122366    3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:32:56.122967    3508 machine.go:94] provisionDockerMachine start ...
	I0717 10:32:56.122978    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:32:56.123097    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:32:56.123191    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:32:56.123279    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:32:56.123377    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:32:56.123509    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:32:56.123686    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:56.123860    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:32:56.123869    3508 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:32:56.127424    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:32:56.136905    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:32:56.138099    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:32:56.138119    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:32:56.138127    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:32:56.138133    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:32:56.517427    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:32:56.517452    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:32:56.632129    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:32:56.632146    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:32:56.632154    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:32:56.632161    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:32:56.632978    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:32:56.632987    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:33:01.882277    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:33:01.882372    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:33:01.882381    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:33:01.905950    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:33:07.183510    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:33:07.183524    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:33:07.183678    3508 buildroot.go:166] provisioning hostname "ha-572000-m02"
	I0717 10:33:07.183687    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:33:07.183789    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.183881    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.183992    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.184084    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.184179    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.184316    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:07.184458    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:07.184466    3508 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000-m02 && echo "ha-572000-m02" | sudo tee /etc/hostname
	I0717 10:33:07.250039    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000-m02
	
	I0717 10:33:07.250065    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.250206    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.250287    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.250390    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.250483    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.250636    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:07.250802    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:07.250815    3508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:33:07.311401    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:33:07.311420    3508 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:33:07.311431    3508 buildroot.go:174] setting up certificates
	I0717 10:33:07.311441    3508 provision.go:84] configureAuth start
	I0717 10:33:07.311448    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:33:07.311593    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:33:07.311680    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.311768    3508 provision.go:143] copyHostCerts
	I0717 10:33:07.311797    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:33:07.311852    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:33:07.311858    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:33:07.312271    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:33:07.312505    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:33:07.312536    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:33:07.312541    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:33:07.312619    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:33:07.312779    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:33:07.312811    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:33:07.312816    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:33:07.312912    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:33:07.313069    3508 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000-m02 san=[127.0.0.1 192.169.0.6 ha-572000-m02 localhost minikube]
	I0717 10:33:07.375154    3508 provision.go:177] copyRemoteCerts
	I0717 10:33:07.375212    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:33:07.375227    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.375382    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.375473    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.375558    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.375656    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:33:07.409433    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:33:07.409505    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:33:07.429479    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:33:07.429539    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 10:33:07.451163    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:33:07.451231    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:33:07.471509    3508 provision.go:87] duration metric: took 160.057268ms to configureAuth
	I0717 10:33:07.471523    3508 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:33:07.471702    3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:33:07.471715    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:07.471860    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.471964    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.472045    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.472140    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.472216    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.472319    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:07.472438    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:07.472446    3508 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:33:07.526742    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:33:07.526766    3508 buildroot.go:70] root file system type: tmpfs
	I0717 10:33:07.526848    3508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:33:07.526860    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.526992    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.527094    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.527175    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.527248    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.527375    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:07.527510    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:07.527555    3508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:33:07.594480    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:33:07.594502    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.594640    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.594720    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.594808    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.594894    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.595019    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:07.595164    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:07.595178    3508 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:33:09.291500    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:33:09.291515    3508 machine.go:97] duration metric: took 13.164785942s to provisionDockerMachine
	I0717 10:33:09.291524    3508 start.go:293] postStartSetup for "ha-572000-m02" (driver="hyperkit")
	I0717 10:33:09.291531    3508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:33:09.291546    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.291729    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:33:09.291743    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:09.291855    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:09.291956    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.292049    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:09.292155    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:33:09.335381    3508 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:33:09.338532    3508 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:33:09.338541    3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:33:09.338631    3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:33:09.338771    3508 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:33:09.338778    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:33:09.338937    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:33:09.346285    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:33:09.366379    3508 start.go:296] duration metric: took 74.672934ms for postStartSetup
	I0717 10:33:09.366399    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.366579    3508 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:33:09.366592    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:09.366681    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:09.366764    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.366841    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:09.366910    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:33:09.399615    3508 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:33:09.399679    3508 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:33:09.453746    3508 fix.go:56] duration metric: took 13.437754461s for fixHost
	I0717 10:33:09.453771    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:09.453917    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:09.454023    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.454133    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.454219    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:09.454344    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:09.454500    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:09.454509    3508 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 10:33:09.507516    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237589.628548940
	
	I0717 10:33:09.507529    3508 fix.go:216] guest clock: 1721237589.628548940
	I0717 10:33:09.507535    3508 fix.go:229] Guest: 2024-07-17 10:33:09.62854894 -0700 PDT Remote: 2024-07-17 10:33:09.453761 -0700 PDT m=+32.267325038 (delta=174.78794ms)
	I0717 10:33:09.507545    3508 fix.go:200] guest clock delta is within tolerance: 174.78794ms
	I0717 10:33:09.507551    3508 start.go:83] releasing machines lock for "ha-572000-m02", held for 13.491465012s
	I0717 10:33:09.507572    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.507699    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:33:09.532514    3508 out.go:177] * Found network options:
	I0717 10:33:09.552891    3508 out.go:177]   - NO_PROXY=192.169.0.5
	W0717 10:33:09.574387    3508 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:33:09.574424    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.575230    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.575434    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.575533    3508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:33:09.575579    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	W0717 10:33:09.575674    3508 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:33:09.575742    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:09.575769    3508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 10:33:09.575787    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:09.575982    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.576003    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:09.576234    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.576305    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:09.576479    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:09.576483    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:33:09.576596    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	W0717 10:33:09.607732    3508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:33:09.607792    3508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:33:09.656923    3508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:33:09.656940    3508 start.go:495] detecting cgroup driver to use...
	I0717 10:33:09.657029    3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:33:09.673202    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:33:09.682149    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:33:09.691293    3508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:33:09.691348    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:33:09.700430    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:33:09.709231    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:33:09.718168    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:33:09.727036    3508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:33:09.736298    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:33:09.745642    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:33:09.754690    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:33:09.763621    3508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:33:09.771717    3508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:33:09.779861    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:33:09.883183    3508 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:33:09.901989    3508 start.go:495] detecting cgroup driver to use...
	I0717 10:33:09.902056    3508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:33:09.919371    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:33:09.932597    3508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:33:09.953462    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:33:09.964583    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:33:09.975437    3508 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:33:09.995754    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:33:10.006015    3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:33:10.020825    3508 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:33:10.023692    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:33:10.030648    3508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:33:10.044228    3508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:33:10.141170    3508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:33:10.249186    3508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:33:10.249214    3508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:33:10.263041    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:33:10.359716    3508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:34:11.416224    3508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.021941021s)
	I0717 10:34:11.416300    3508 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0717 10:34:11.450835    3508 out.go:177] 
	W0717 10:34:11.471671    3508 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 17:33:08 ha-572000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 17:33:08 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:08.044876852Z" level=info msg="Starting up"
	Jul 17 17:33:08 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:08.045449556Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 17:33:08 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:08.049003475Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=496
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.064081003Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079222179Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079310364Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079376764Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079411371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079557600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079609621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079752864Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079797312Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079887739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079928799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.080046807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.080239575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.081923027Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.081977822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082123136Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082166838Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082275842Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082332754Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084273060Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084339651Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084378389Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084411359Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084442922Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084509418Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084664339Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084738339Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084774254Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084804627Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084874943Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084911894Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084942267Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084972768Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085003365Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085032856Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085062302Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085090775Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085129743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085161980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085192066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085224112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085253798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085286177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085315810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085345112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085374976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085410351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085440979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085471089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085500214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085532017Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085571085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085603089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085635203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085683933Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085717630Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085747936Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085777505Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085805608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085834007Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085861655Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086142807Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086206245Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086259095Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086322237Z" level=info msg="containerd successfully booted in 0.022994s"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.065923436Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.108166477Z" level=info msg="Loading containers: start."
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.277192209Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.336888641Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.380874805Z" level=info msg="Loading containers: done."
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.387565385Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.387757279Z" level=info msg="Daemon has completed initialization"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.411794010Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.411982753Z" level=info msg="API listen on [::]:2376"
	Jul 17 17:33:09 ha-572000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.490943827Z" level=info msg="Processing signal 'terminated'"
	Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.491923813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.491997518Z" level=info msg="Daemon shutdown complete"
	Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.492029261Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.492040420Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 17:33:10 ha-572000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 17:33:11 ha-572000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 17:33:11 ha-572000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 17:33:11 ha-572000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 17:33:11 ha-572000-m02 dockerd[1164]: time="2024-07-17T17:33:11.528450348Z" level=info msg="Starting up"
	Jul 17 17:34:11 ha-572000-m02 dockerd[1164]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 17:34:11 ha-572000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 17:34:11 ha-572000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 17:34:11 ha-572000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0717 10:34:11.471802    3508 out.go:239] * 
	W0717 10:34:11.473037    3508 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:34:11.536857    3508 out.go:177] 
	
	
	==> Docker <==
	Jul 17 17:33:02 ha-572000 dockerd[1178]: time="2024-07-17T17:33:02.455192722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:33:23 ha-572000 dockerd[1178]: time="2024-07-17T17:33:23.740501414Z" level=info msg="shim disconnected" id=a6e0742139d63216679ae18bc09ccda23c0ed0e4d7a419b36e966729969bea9b namespace=moby
	Jul 17 17:33:23 ha-572000 dockerd[1178]: time="2024-07-17T17:33:23.740886535Z" level=warning msg="cleaning up after shim disconnected" id=a6e0742139d63216679ae18bc09ccda23c0ed0e4d7a419b36e966729969bea9b namespace=moby
	Jul 17 17:33:23 ha-572000 dockerd[1178]: time="2024-07-17T17:33:23.741204478Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 17 17:33:23 ha-572000 dockerd[1171]: time="2024-07-17T17:33:23.741723202Z" level=info msg="ignoring event" container=a6e0742139d63216679ae18bc09ccda23c0ed0e4d7a419b36e966729969bea9b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 17:33:24 ha-572000 dockerd[1178]: time="2024-07-17T17:33:24.747049658Z" level=info msg="shim disconnected" id=3f41d1a5d2c920e02b958fc57cddcf6e76eb7e4b56f3544ba5066b682eea0c8d namespace=moby
	Jul 17 17:33:24 ha-572000 dockerd[1178]: time="2024-07-17T17:33:24.747592119Z" level=warning msg="cleaning up after shim disconnected" id=3f41d1a5d2c920e02b958fc57cddcf6e76eb7e4b56f3544ba5066b682eea0c8d namespace=moby
	Jul 17 17:33:24 ha-572000 dockerd[1178]: time="2024-07-17T17:33:24.747636154Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 17 17:33:24 ha-572000 dockerd[1171]: time="2024-07-17T17:33:24.747788453Z" level=info msg="ignoring event" container=3f41d1a5d2c920e02b958fc57cddcf6e76eb7e4b56f3544ba5066b682eea0c8d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 17:33:34 ha-572000 dockerd[1178]: time="2024-07-17T17:33:34.836028865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:33:34 ha-572000 dockerd[1178]: time="2024-07-17T17:33:34.836093957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:33:34 ha-572000 dockerd[1178]: time="2024-07-17T17:33:34.836105101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:33:34 ha-572000 dockerd[1178]: time="2024-07-17T17:33:34.836225522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:33:38 ha-572000 dockerd[1178]: time="2024-07-17T17:33:38.652806846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:33:38 ha-572000 dockerd[1178]: time="2024-07-17T17:33:38.652893670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:33:38 ha-572000 dockerd[1178]: time="2024-07-17T17:33:38.652906541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:33:38 ha-572000 dockerd[1178]: time="2024-07-17T17:33:38.657845113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:33:59 ha-572000 dockerd[1171]: time="2024-07-17T17:33:59.069677227Z" level=info msg="ignoring event" container=8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 17:33:59 ha-572000 dockerd[1178]: time="2024-07-17T17:33:59.071115848Z" level=info msg="shim disconnected" id=8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f namespace=moby
	Jul 17 17:33:59 ha-572000 dockerd[1178]: time="2024-07-17T17:33:59.071609934Z" level=warning msg="cleaning up after shim disconnected" id=8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f namespace=moby
	Jul 17 17:33:59 ha-572000 dockerd[1178]: time="2024-07-17T17:33:59.071768605Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 17 17:34:00 ha-572000 dockerd[1171]: time="2024-07-17T17:34:00.079691666Z" level=info msg="ignoring event" container=1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 17:34:00 ha-572000 dockerd[1178]: time="2024-07-17T17:34:00.081342846Z" level=info msg="shim disconnected" id=1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca namespace=moby
	Jul 17 17:34:00 ha-572000 dockerd[1178]: time="2024-07-17T17:34:00.081524291Z" level=warning msg="cleaning up after shim disconnected" id=1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca namespace=moby
	Jul 17 17:34:00 ha-572000 dockerd[1178]: time="2024-07-17T17:34:00.081549356Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8f09c09ed996a       56ce0fd9fb532                                                                                         37 seconds ago       Exited              kube-apiserver            2                   6d7eb0e874999       kube-apiserver-ha-572000
	1e8f9939826f4       e874818b3caac                                                                                         41 seconds ago       Exited              kube-controller-manager   2                   b7d58c526c444       kube-controller-manager-ha-572000
	138bf6784d59c       38af8ddebf499                                                                                         About a minute ago   Running             kube-vip                  0                   df04438a4c5cc       kube-vip-ha-572000
	a53f8fcdf5d97       7820c83aa1394                                                                                         About a minute ago   Running             kube-scheduler            1                   bfd880612991e       kube-scheduler-ha-572000
	a3398a8ca33aa       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      1                   986ceb5a6f870       etcd-ha-572000
	e1a5eb1bed550       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   3 minutes ago        Exited              busybox                   0                   29ab413131af2       busybox-fc5497c4f-5r4wl
	bb44d784bb7ab       cbb01a7bd410d                                                                                         6 minutes ago        Exited              coredns                   0                   b8c622f08395f       coredns-7db6d8ff4d-2phrp
	7b275812468c9       cbb01a7bd410d                                                                                         6 minutes ago        Exited              coredns                   0                   2588bd7c40c23       coredns-7db6d8ff4d-9dzd5
	12ba2e181ee9a       6e38f40d628db                                                                                         6 minutes ago        Exited              storage-provisioner       0                   04b7cdcbedf20       storage-provisioner
	6e40e1427ab20       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              6 minutes ago        Exited              kindnet-cni               0                   32dc836c0a2df       kindnet-t85bv
	2aeed19835352       53c535741fb44                                                                                         6 minutes ago        Exited              kube-proxy                0                   f688e08d591be       kube-proxy-hst7h
	9200160f355ce       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago        Exited              kube-vip                  0                   1742f4f388abf       kube-vip-ha-572000
	e29f4fe295c1c       7820c83aa1394                                                                                         6 minutes ago        Exited              kube-scheduler            0                   25d825604d9f6       kube-scheduler-ha-572000
	c6527d620dad2       3861cfcd7c04c                                                                                         6 minutes ago        Exited              etcd                      0                   8844aab508d79       etcd-ha-572000
	
	
	==> coredns [7b275812468c] <==
	[INFO] 10.244.0.4:49035 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001628883s
	[INFO] 10.244.0.4:59665 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081818s
	[INFO] 10.244.0.4:52274 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077378s
	[INFO] 10.244.1.2:47113 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103907s
	[INFO] 10.244.2.2:57796 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060561s
	[INFO] 10.244.2.2:40339 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000425199s
	[INFO] 10.244.2.2:38339 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010468s
	[INFO] 10.244.2.2:50750 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070547s
	[INFO] 10.244.0.4:57426 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000045039s
	[INFO] 10.244.0.4:44283 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031195s
	[INFO] 10.244.1.2:56783 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117359s
	[INFO] 10.244.1.2:34315 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061358s
	[INFO] 10.244.1.2:55792 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062223s
	[INFO] 10.244.2.2:35768 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086426s
	[INFO] 10.244.2.2:42473 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102022s
	[INFO] 10.244.0.4:53524 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000087177s
	[INFO] 10.244.0.4:35224 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079092s
	[INFO] 10.244.0.4:53020 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075585s
	[INFO] 10.244.1.2:58074 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138293s
	[INFO] 10.244.1.2:40567 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000088337s
	[INFO] 10.244.1.2:46301 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000065328s
	[INFO] 10.244.1.2:59898 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075157s
	[INFO] 10.244.2.2:48754 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000081888s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bb44d784bb7a] <==
	[INFO] 10.244.0.4:54052 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001019206s
	[INFO] 10.244.0.4:45412 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077905s
	[INFO] 10.244.0.4:37924 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159382s
	[INFO] 10.244.1.2:45862 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108196s
	[INFO] 10.244.1.2:47464 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000616154s
	[INFO] 10.244.1.2:53720 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059338s
	[INFO] 10.244.1.2:49774 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123053s
	[INFO] 10.244.1.2:54472 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000097287s
	[INFO] 10.244.1.2:52054 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064095s
	[INFO] 10.244.1.2:54428 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064142s
	[INFO] 10.244.2.2:53088 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109393s
	[INFO] 10.244.2.2:49993 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000114279s
	[INFO] 10.244.2.2:42708 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063661s
	[INFO] 10.244.2.2:57150 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000043434s
	[INFO] 10.244.0.4:45987 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112397s
	[INFO] 10.244.0.4:44116 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057797s
	[INFO] 10.244.1.2:37635 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105428s
	[INFO] 10.244.2.2:52081 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131248s
	[INFO] 10.244.2.2:54912 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087858s
	[INFO] 10.244.0.4:53981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079367s
	[INFO] 10.244.2.2:45850 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152036s
	[INFO] 10.244.2.2:36004 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000223s
	[INFO] 10.244.2.2:54459 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000128834s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0717 17:34:15.981303    2751 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0717 17:34:15.982377    2751 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0717 17:34:15.983100    2751 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0717 17:34:15.985057    2751 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0717 17:34:15.985709    2751 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006777] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.574354] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +1.320177] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +1.823982] systemd-fstab-generator[476]: Ignoring "noauto" option for root device
	[  +0.112634] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
	[  +1.921936] systemd-fstab-generator[1101]: Ignoring "noauto" option for root device
	[  +0.055582] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.184169] systemd-fstab-generator[1137]: Ignoring "noauto" option for root device
	[  +0.104772] systemd-fstab-generator[1149]: Ignoring "noauto" option for root device
	[  +0.113810] systemd-fstab-generator[1163]: Ignoring "noauto" option for root device
	[  +2.482664] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.099717] systemd-fstab-generator[1391]: Ignoring "noauto" option for root device
	[  +0.099983] systemd-fstab-generator[1403]: Ignoring "noauto" option for root device
	[  +0.118727] systemd-fstab-generator[1418]: Ignoring "noauto" option for root device
	[  +0.437794] systemd-fstab-generator[1575]: Ignoring "noauto" option for root device
	[Jul17 17:33] kauditd_printk_skb: 234 callbacks suppressed
	[ +21.575073] kauditd_printk_skb: 40 callbacks suppressed
	[ +31.253030] clocksource: timekeeping watchdog on CPU1: Marking clocksource 'tsc' as unstable because the skew is too large:
	[  +0.000025] clocksource:                       'hpet' wd_now: 2db2a3c3 wd_last: 2d0e4271 mask: ffffffff
	[  +0.000022] clocksource:                       'tsc' cs_now: 5d6b30d2ea8 cs_last: 5d5e653cfb0 mask: ffffffffffffffff
	[  +0.001528] TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'.
	[  +0.002348] clocksource: Checking clocksource tsc synchronization from CPU 0.
	
	
	==> etcd [a3398a8ca33a] <==
	{"level":"info","ts":"2024-07-17T17:34:09.88953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:09.890409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:09.890542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:11.580196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:11.580274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:11.580332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:11.580874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:11.58095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	{"level":"warn","ts":"2024-07-17T17:34:12.931535Z","caller":"etcdserver/server.go:2089","msg":"failed to publish local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-572000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2024-07-17T17:34:12.933019Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1d3f36ee75516151","rtt":"0s","error":"dial tcp 192.169.0.7:2380: i/o timeout"}
	{"level":"warn","ts":"2024-07-17T17:34:12.933094Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f6a6d94fe6ea5bc8","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:34:12.934267Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f6a6d94fe6ea5bc8","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:34:12.934292Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1d3f36ee75516151","rtt":"0s","error":"dial tcp 192.169.0.7:2380: i/o timeout"}
	{"level":"info","ts":"2024-07-17T17:34:13.279569Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:13.279609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:13.279634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:13.279658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:13.279676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:14.979641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:14.979698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:14.979724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:14.979755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:14.979817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	{"level":"warn","ts":"2024-07-17T17:34:15.896192Z","caller":"etcdhttp/health.go:232","msg":"serving /health false; no leader"}
	{"level":"warn","ts":"2024-07-17T17:34:15.896261Z","caller":"etcdhttp/health.go:119","msg":"/health error","output":"{\"health\":\"false\",\"reason\":\"RAFT NO LEADER\"}","status-code":503}
	
	
	==> etcd [c6527d620dad] <==
	{"level":"warn","ts":"2024-07-17T17:32:29.48769Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T17:32:24.462555Z","time spent":"5.025134128s","remote":"127.0.0.1:36734","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/07/17 17:32:29 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-17T17:32:29.48774Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T17:32:25.149839Z","time spent":"4.337900582s","remote":"127.0.0.1:45174","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/07/17 17:32:29 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-17T17:32:29.512674Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T17:32:29.512703Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T17:32:29.512731Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"warn","ts":"2024-07-17T17:32:29.512821Z","caller":"etcdserver/server.go:1165","msg":"failed to revoke lease","lease-id":"584490c1bc074071","error":"etcdserver: request cancelled"}
	{"level":"info","ts":"2024-07-17T17:32:29.512836Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f6a6d94fe6ea5bc8"}
	{"level":"info","ts":"2024-07-17T17:32:29.512844Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f6a6d94fe6ea5bc8"}
	{"level":"info","ts":"2024-07-17T17:32:29.512857Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f6a6d94fe6ea5bc8"}
	{"level":"info","ts":"2024-07-17T17:32:29.512905Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"f6a6d94fe6ea5bc8"}
	{"level":"info","ts":"2024-07-17T17:32:29.512927Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"f6a6d94fe6ea5bc8"}
	{"level":"info","ts":"2024-07-17T17:32:29.512948Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"f6a6d94fe6ea5bc8"}
	{"level":"info","ts":"2024-07-17T17:32:29.512956Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f6a6d94fe6ea5bc8"}
	{"level":"info","ts":"2024-07-17T17:32:29.51296Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:32:29.512966Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:32:29.512977Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:32:29.513753Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:32:29.513778Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:32:29.516864Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:32:29.516891Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:32:29.518343Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-07-17T17:32:29.51839Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-07-17T17:32:29.518397Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-572000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 17:34:16 up 1 min,  0 users,  load average: 0.09, 0.05, 0.01
	Linux ha-572000 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6e40e1427ab2] <==
	I0717 17:31:56.892269       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:32:06.898213       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:32:06.898253       1 main.go:303] handling current node
	I0717 17:32:06.898264       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:32:06.898269       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:32:06.898416       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:32:06.898443       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:32:06.898526       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:32:06.898555       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:32:16.896377       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:32:16.896415       1 main.go:303] handling current node
	I0717 17:32:16.896426       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:32:16.896432       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:32:16.896606       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:32:16.896636       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:32:16.896674       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:32:16.896699       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:32:26.896557       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:32:26.896622       1 main.go:303] handling current node
	I0717 17:32:26.896678       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:32:26.896718       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:32:26.896938       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:32:26.897017       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:32:26.897158       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:32:26.897880       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8f09c09ed996] <==
	I0717 17:33:38.766324       1 options.go:221] external host was not specified, using 192.169.0.5
	I0717 17:33:38.766955       1 server.go:148] Version: v1.30.2
	I0717 17:33:38.767101       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:33:39.044188       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0717 17:33:39.046954       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 17:33:39.049409       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0717 17:33:39.049435       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0717 17:33:39.049563       1 instance.go:299] Using reconciler: lease
	W0717 17:33:59.045294       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0717 17:33:59.045986       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0717 17:33:59.051243       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0717 17:33:59.051294       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [1e8f9939826f] <==
	I0717 17:33:35.199426       1 serving.go:380] Generated self-signed cert in-memory
	I0717 17:33:35.611724       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0717 17:33:35.611860       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:33:35.612992       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 17:33:35.613172       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0717 17:33:35.613294       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0717 17:33:35.613433       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0717 17:34:00.060238       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:34220->192.169.0.5:8443: read: connection reset by peer"
	
	
	==> kube-proxy [2aeed1983535] <==
	I0717 17:27:43.315695       1 server_linux.go:69] "Using iptables proxy"
	I0717 17:27:43.322673       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	I0717 17:27:43.354011       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 17:27:43.354032       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 17:27:43.354044       1 server_linux.go:165] "Using iptables Proxier"
	I0717 17:27:43.355997       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 17:27:43.356216       1 server.go:872] "Version info" version="v1.30.2"
	I0717 17:27:43.356225       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:27:43.356874       1 config.go:192] "Starting service config controller"
	I0717 17:27:43.356903       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 17:27:43.356921       1 config.go:319] "Starting node config controller"
	I0717 17:27:43.356943       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 17:27:43.357077       1 config.go:101] "Starting endpoint slice config controller"
	I0717 17:27:43.357144       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 17:27:43.457513       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 17:27:43.457607       1 shared_informer.go:320] Caches are synced for node config
	I0717 17:27:43.457639       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [a53f8fcdf5d9] <==
	E0717 17:34:00.060786       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59664->192.169.0.5:8443: read: connection reset by peer
	W0717 17:34:07.801237       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:07.801736       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:08.253794       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:08.253923       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:08.255527       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:08.255685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:09.119507       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:09.120276       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:09.844542       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:09.845089       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:11.002782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:11.003425       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:11.004959       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:11.005426       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:11.724425       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:11.724599       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:13.328905       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:13.329009       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:13.526537       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:13.526638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:14.532488       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:14.533163       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:14.949017       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:14.949354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	
	
	==> kube-scheduler [e29f4fe295c1] <==
	W0717 17:27:25.906822       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 17:27:25.906857       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 17:27:25.906870       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 17:27:25.906912       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 17:27:26.715070       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 17:27:26.715127       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 17:27:26.797242       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 17:27:26.797298       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 17:27:26.957071       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 17:27:26.957111       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 17:27:27.013148       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 17:27:27.013190       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 17:27:29.895450       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 17:30:13.328557       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-jhz2d\": pod busybox-fc5497c4f-jhz2d is already assigned to node \"ha-572000-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-jhz2d" node="ha-572000-m03"
	E0717 17:30:13.329015       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2f9e6064-727c-486c-b925-3ce5866e42ff(default/busybox-fc5497c4f-jhz2d) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-jhz2d"
	E0717 17:30:13.329121       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-jhz2d\": pod busybox-fc5497c4f-jhz2d is already assigned to node \"ha-572000-m03\"" pod="default/busybox-fc5497c4f-jhz2d"
	I0717 17:30:13.329256       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-jhz2d" node="ha-572000-m03"
	E0717 17:30:13.362412       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-zwhws\": pod busybox-fc5497c4f-zwhws is already assigned to node \"ha-572000\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-zwhws" node="ha-572000"
	E0717 17:30:13.362474       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-zwhws\": pod busybox-fc5497c4f-zwhws is already assigned to node \"ha-572000\"" pod="default/busybox-fc5497c4f-zwhws"
	E0717 17:30:13.441720       1 schedule_one.go:1067] "Error occurred" err="Pod default/busybox-fc5497c4f-l7sqr is already present in the active queue" pod="default/busybox-fc5497c4f-l7sqr"
	E0717 17:30:39.870609       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5wcph\": pod kube-proxy-5wcph is already assigned to node \"ha-572000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5wcph" node="ha-572000-m04"
	E0717 17:30:39.870661       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 731f5b57-131e-4e97-b47a-036b8d4edbcd(kube-system/kube-proxy-5wcph) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5wcph"
	E0717 17:30:39.870672       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5wcph\": pod kube-proxy-5wcph is already assigned to node \"ha-572000-m04\"" pod="kube-system/kube-proxy-5wcph"
	I0717 17:30:39.870686       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5wcph" node="ha-572000-m04"
	E0717 17:32:29.355082       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 17 17:34:00 ha-572000 kubelet[1582]: E0717 17:34:00.130076    1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-572000_kube-system(967ef612fac8855d0ef3892ca2ae35cc)\"" pod="kube-system/kube-controller-manager-ha-572000" podUID="967ef612fac8855d0ef3892ca2ae35cc"
	Jul 17 17:34:00 ha-572000 kubelet[1582]: I0717 17:34:00.145332    1582 scope.go:117] "RemoveContainer" containerID="8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f"
	Jul 17 17:34:00 ha-572000 kubelet[1582]: E0717 17:34:00.145681    1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-572000_kube-system(2508736da9632d6a9c9aaf9c250b4f65)\"" pod="kube-system/kube-apiserver-ha-572000" podUID="2508736da9632d6a9c9aaf9c250b4f65"
	Jul 17 17:34:00 ha-572000 kubelet[1582]: I0717 17:34:00.150461    1582 scope.go:117] "RemoveContainer" containerID="a6e0742139d63216679ae18bc09ccda23c0ed0e4d7a419b36e966729969bea9b"
	Jul 17 17:34:00 ha-572000 kubelet[1582]: I0717 17:34:00.925798    1582 kubelet_node_status.go:73] "Attempting to register node" node="ha-572000"
	Jul 17 17:34:03 ha-572000 kubelet[1582]: E0717 17:34:03.198768    1582 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-572000"
	Jul 17 17:34:03 ha-572000 kubelet[1582]: E0717 17:34:03.199285    1582 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-572000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Jul 17 17:34:03 ha-572000 kubelet[1582]: I0717 17:34:03.360896    1582 scope.go:117] "RemoveContainer" containerID="1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca"
	Jul 17 17:34:03 ha-572000 kubelet[1582]: E0717 17:34:03.361398    1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-572000_kube-system(967ef612fac8855d0ef3892ca2ae35cc)\"" pod="kube-system/kube-controller-manager-ha-572000" podUID="967ef612fac8855d0ef3892ca2ae35cc"
	Jul 17 17:34:04 ha-572000 kubelet[1582]: I0717 17:34:04.792263    1582 scope.go:117] "RemoveContainer" containerID="1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca"
	Jul 17 17:34:04 ha-572000 kubelet[1582]: E0717 17:34:04.792672    1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-572000_kube-system(967ef612fac8855d0ef3892ca2ae35cc)\"" pod="kube-system/kube-controller-manager-ha-572000" podUID="967ef612fac8855d0ef3892ca2ae35cc"
	Jul 17 17:34:05 ha-572000 kubelet[1582]: I0717 17:34:05.369319    1582 scope.go:117] "RemoveContainer" containerID="8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f"
	Jul 17 17:34:05 ha-572000 kubelet[1582]: E0717 17:34:05.369956    1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-572000_kube-system(2508736da9632d6a9c9aaf9c250b4f65)\"" pod="kube-system/kube-apiserver-ha-572000" podUID="2508736da9632d6a9c9aaf9c250b4f65"
	Jul 17 17:34:05 ha-572000 kubelet[1582]: E0717 17:34:05.660481    1582 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-572000\" not found"
	Jul 17 17:34:08 ha-572000 kubelet[1582]: I0717 17:34:08.082261    1582 scope.go:117] "RemoveContainer" containerID="8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f"
	Jul 17 17:34:08 ha-572000 kubelet[1582]: E0717 17:34:08.082649    1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-572000_kube-system(2508736da9632d6a9c9aaf9c250b4f65)\"" pod="kube-system/kube-apiserver-ha-572000" podUID="2508736da9632d6a9c9aaf9c250b4f65"
	Jul 17 17:34:09 ha-572000 kubelet[1582]: E0717 17:34:09.342982    1582 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-572000.17e310749989a167  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-572000,UID:ha-572000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-572000,},FirstTimestamp:2024-07-17 17:32:55.563845991 +0000 UTC m=+0.198827444,LastTimestamp:2024-07-17 17:32:55.563845991 +0000 UTC m=+0.198827444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-572000,}"
	Jul 17 17:34:10 ha-572000 kubelet[1582]: I0717 17:34:10.207057    1582 kubelet_node_status.go:73] "Attempting to register node" node="ha-572000"
	Jul 17 17:34:12 ha-572000 kubelet[1582]: E0717 17:34:12.418909    1582 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-572000"
	Jul 17 17:34:12 ha-572000 kubelet[1582]: E0717 17:34:12.419039    1582 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-572000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Jul 17 17:34:15 ha-572000 kubelet[1582]: W0717 17:34:15.486570    1582 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Jul 17 17:34:15 ha-572000 kubelet[1582]: E0717 17:34:15.486678    1582 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Jul 17 17:34:15 ha-572000 kubelet[1582]: E0717 17:34:15.662153    1582 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-572000\" not found"
	Jul 17 17:34:16 ha-572000 kubelet[1582]: I0717 17:34:16.604071    1582 scope.go:117] "RemoveContainer" containerID="1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca"
	Jul 17 17:34:16 ha-572000 kubelet[1582]: E0717 17:34:16.604626    1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-572000_kube-system(967ef612fac8855d0ef3892ca2ae35cc)\"" pod="kube-system/kube-controller-manager-ha-572000" podUID="967ef612fac8855d0ef3892ca2ae35cc"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-572000 -n ha-572000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-572000 -n ha-572000: exit status 2 (159.442735ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-572000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (3.01s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (16.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:413: expected profile "ha-572000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-572000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-572000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACoun
t\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-572000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"Ku
bernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\"
:false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP
\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-572000 -n ha-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-572000 -n ha-572000: exit status 2 (157.184156ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 logs -n 25
E0717 10:34:17.744237    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-572000 logs -n 25: (2.309035427s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m02 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m03_ha-572000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m03:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04:/home/docker/cp-test_ha-572000-m03_ha-572000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m04 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m03_ha-572000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-572000 cp testdata/cp-test.txt                                                                                            | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3497274976/001/cp-test_ha-572000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000:/home/docker/cp-test_ha-572000-m04_ha-572000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000 sudo cat                                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m04_ha-572000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m02:/home/docker/cp-test_ha-572000-m04_ha-572000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m02 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m04_ha-572000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m03:/home/docker/cp-test_ha-572000-m04_ha-572000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m03 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m04_ha-572000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-572000 node stop m02 -v=7                                                                                                 | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-572000 node start m02 -v=7                                                                                                | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:32 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-572000 -v=7                                                                                                       | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-572000 -v=7                                                                                                            | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT | 17 Jul 24 10:32 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-572000 --wait=true -v=7                                                                                                | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-572000                                                                                                            | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:34 PDT |                     |
	| node    | ha-572000 node delete m03 -v=7                                                                                               | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:34 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 10:32:37
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 10:32:37.218202    3508 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:32:37.218482    3508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:32:37.218488    3508 out.go:304] Setting ErrFile to fd 2...
	I0717 10:32:37.218492    3508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:32:37.218678    3508 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
	I0717 10:32:37.220111    3508 out.go:298] Setting JSON to false
	I0717 10:32:37.243881    3508 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1928,"bootTime":1721235629,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0717 10:32:37.243971    3508 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:32:37.265852    3508 out.go:177] * [ha-572000] minikube v1.33.1 on Darwin 14.5
	I0717 10:32:37.307717    3508 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 10:32:37.307783    3508 notify.go:220] Checking for updates...
	I0717 10:32:37.352082    3508 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:32:37.394723    3508 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 10:32:37.416561    3508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:32:37.437566    3508 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
	I0717 10:32:37.458758    3508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:32:37.480259    3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:32:37.480391    3508 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:32:37.481074    3508 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:32:37.481147    3508 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:32:37.491120    3508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51745
	I0717 10:32:37.491492    3508 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:32:37.491919    3508 main.go:141] libmachine: Using API Version  1
	I0717 10:32:37.491928    3508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:32:37.492189    3508 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:32:37.492307    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:37.520549    3508 out.go:177] * Using the hyperkit driver based on existing profile
	I0717 10:32:37.563535    3508 start.go:297] selected driver: hyperkit
	I0717 10:32:37.563555    3508 start.go:901] validating driver "hyperkit" against &{Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:fals
e efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:32:37.563770    3508 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:32:37.563903    3508 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:32:37.564063    3508 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19283-1099/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0717 10:32:37.572774    3508 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0717 10:32:37.578697    3508 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:32:37.578722    3508 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0717 10:32:37.582004    3508 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:32:37.582058    3508 cni.go:84] Creating CNI manager for ""
	I0717 10:32:37.582066    3508 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 10:32:37.582150    3508 start.go:340] cluster config:
	{Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-t
iller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:32:37.582277    3508 iso.go:125] acquiring lock: {Name:mkf51f842bcc8a77e9c7c50d642c4c76848e96af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:32:37.624644    3508 out.go:177] * Starting "ha-572000" primary control-plane node in "ha-572000" cluster
	I0717 10:32:37.645662    3508 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:32:37.645750    3508 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0717 10:32:37.645778    3508 cache.go:56] Caching tarball of preloaded images
	I0717 10:32:37.645983    3508 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:32:37.646002    3508 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:32:37.646175    3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:32:37.647084    3508 start.go:360] acquireMachinesLock for ha-572000: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:32:37.647209    3508 start.go:364] duration metric: took 99.885µs to acquireMachinesLock for "ha-572000"
	I0717 10:32:37.647240    3508 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:32:37.647261    3508 fix.go:54] fixHost starting: 
	I0717 10:32:37.647673    3508 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:32:37.647700    3508 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:32:37.656651    3508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51747
	I0717 10:32:37.657021    3508 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:32:37.657336    3508 main.go:141] libmachine: Using API Version  1
	I0717 10:32:37.657346    3508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:32:37.657590    3508 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:32:37.657719    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:37.657832    3508 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:32:37.657936    3508 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:37.658021    3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 2926
	I0717 10:32:37.658989    3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid 2926 missing from process table
	I0717 10:32:37.658986    3508 fix.go:112] recreateIfNeeded on ha-572000: state=Stopped err=<nil>
	I0717 10:32:37.659004    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	W0717 10:32:37.659109    3508 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:32:37.701727    3508 out.go:177] * Restarting existing hyperkit VM for "ha-572000" ...
	I0717 10:32:37.722485    3508 main.go:141] libmachine: (ha-572000) Calling .Start
	I0717 10:32:37.722730    3508 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:37.722799    3508 main.go:141] libmachine: (ha-572000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid
	I0717 10:32:37.724830    3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid 2926 missing from process table
	I0717 10:32:37.724872    3508 main.go:141] libmachine: (ha-572000) DBG | pid 2926 is in state "Stopped"
	I0717 10:32:37.724889    3508 main.go:141] libmachine: (ha-572000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid...
	I0717 10:32:37.725226    3508 main.go:141] libmachine: (ha-572000) DBG | Using UUID 5f2666de-0b32-4258-9840-7856c1bd4173
	I0717 10:32:37.837447    3508 main.go:141] libmachine: (ha-572000) DBG | Generated MAC d2:a6:10:ad:80:98
	I0717 10:32:37.837476    3508 main.go:141] libmachine: (ha-572000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:32:37.837593    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5f2666de-0b32-4258-9840-7856c1bd4173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bc9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:32:37.837631    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5f2666de-0b32-4258-9840-7856c1bd4173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bc9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:32:37.837679    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5f2666de-0b32-4258-9840-7856c1bd4173", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/ha-572000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:32:37.837720    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5f2666de-0b32-4258-9840-7856c1bd4173 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/ha-572000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:32:37.837736    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:32:37.839166    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Pid is 3521
	I0717 10:32:37.839653    3508 main.go:141] libmachine: (ha-572000) DBG | Attempt 0
	I0717 10:32:37.839674    3508 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:37.839714    3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3521
	I0717 10:32:37.841412    3508 main.go:141] libmachine: (ha-572000) DBG | Searching for d2:a6:10:ad:80:98 in /var/db/dhcpd_leases ...
	I0717 10:32:37.841498    3508 main.go:141] libmachine: (ha-572000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:32:37.841515    3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:37:45:6a:f1:7f ID:1,1e:37:45:6a:f1:7f Lease:0x6698001b}
	I0717 10:32:37.841527    3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x6699517a}
	I0717 10:32:37.841536    3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:d3:62:da:43:cf ID:1,6e:d3:62:da:43:cf Lease:0x669950e4}
	I0717 10:32:37.841559    3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x66994ff6}
	I0717 10:32:37.841570    3508 main.go:141] libmachine: (ha-572000) DBG | Found match: d2:a6:10:ad:80:98
	I0717 10:32:37.841595    3508 main.go:141] libmachine: (ha-572000) DBG | IP: 192.169.0.5
	I0717 10:32:37.841705    3508 main.go:141] libmachine: (ha-572000) Calling .GetConfigRaw
	I0717 10:32:37.842357    3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:32:37.842580    3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:32:37.843052    3508 machine.go:94] provisionDockerMachine start ...
	I0717 10:32:37.843065    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:37.843201    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:37.843303    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:37.843420    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:37.843572    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:37.843663    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:37.843791    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:37.844002    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:37.844014    3508 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:32:37.847060    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:32:37.898878    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:32:37.899633    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:32:37.899658    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:32:37.899668    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:32:37.899678    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:32:38.277909    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:32:38.277922    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:32:38.392613    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:32:38.392633    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:32:38.392644    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:32:38.392676    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:32:38.393519    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:32:38.393530    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:32:43.648108    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:32:43.648154    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:32:43.648161    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:32:43.672680    3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:32:48.904402    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:32:48.904418    3508 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:32:48.904582    3508 buildroot.go:166] provisioning hostname "ha-572000"
	I0717 10:32:48.904593    3508 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:32:48.904692    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:48.904776    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:48.904887    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:48.904976    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:48.905073    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:48.905225    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:48.905383    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:48.905392    3508 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000 && echo "ha-572000" | sudo tee /etc/hostname
	I0717 10:32:48.967564    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000
	
	I0717 10:32:48.967584    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:48.967740    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:48.967836    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:48.967934    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:48.968014    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:48.968132    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:48.968282    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:48.968293    3508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:32:49.026313    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:32:49.026336    3508 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:32:49.026353    3508 buildroot.go:174] setting up certificates
	I0717 10:32:49.026367    3508 provision.go:84] configureAuth start
	I0717 10:32:49.026375    3508 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:32:49.026507    3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:32:49.026613    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:49.026706    3508 provision.go:143] copyHostCerts
	I0717 10:32:49.026741    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:32:49.026811    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:32:49.026819    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:32:49.026972    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:32:49.027200    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:32:49.027231    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:32:49.027236    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:32:49.027325    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:32:49.027487    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:32:49.027519    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:32:49.027524    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:32:49.027590    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:32:49.027748    3508 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000 san=[127.0.0.1 192.169.0.5 ha-572000 localhost minikube]
	I0717 10:32:49.085766    3508 provision.go:177] copyRemoteCerts
	I0717 10:32:49.085812    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:32:49.085827    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:49.086112    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:49.086217    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.086305    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:49.086395    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:32:49.120573    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:32:49.120648    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:32:49.139510    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:32:49.139585    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0717 10:32:49.158247    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:32:49.158317    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:32:49.177520    3508 provision.go:87] duration metric: took 151.137832ms to configureAuth
	I0717 10:32:49.177532    3508 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:32:49.177693    3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:32:49.177706    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:49.177837    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:49.177945    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:49.178031    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.178106    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.178195    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:49.178315    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:49.178439    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:49.178454    3508 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:32:49.231928    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:32:49.231939    3508 buildroot.go:70] root file system type: tmpfs
	I0717 10:32:49.232011    3508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:32:49.232025    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:49.232158    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:49.232247    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.232341    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.232427    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:49.232563    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:49.232710    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:49.232755    3508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:32:49.295280    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:32:49.295308    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:49.295446    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:49.295550    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.295637    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:49.295723    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:49.295852    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:49.295991    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:49.296003    3508 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:32:50.972633    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:32:50.972648    3508 machine.go:97] duration metric: took 13.129388483s to provisionDockerMachine
	I0717 10:32:50.972660    3508 start.go:293] postStartSetup for "ha-572000" (driver="hyperkit")
	I0717 10:32:50.972668    3508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:32:50.972678    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:50.972893    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:32:50.972908    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:50.973007    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:50.973108    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:50.973193    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:50.973281    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:32:51.011765    3508 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:32:51.016752    3508 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:32:51.016768    3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:32:51.016865    3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:32:51.017004    3508 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:32:51.017011    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:32:51.017179    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:32:51.027779    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:32:51.057568    3508 start.go:296] duration metric: took 84.89741ms for postStartSetup
	I0717 10:32:51.057590    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:51.057768    3508 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:32:51.057780    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:51.057871    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:51.057953    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:51.058038    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:51.058120    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:32:51.090670    3508 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:32:51.090728    3508 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:32:51.124190    3508 fix.go:56] duration metric: took 13.476731728s for fixHost
	I0717 10:32:51.124211    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:51.124344    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:51.124460    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:51.124556    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:51.124646    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:51.124769    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:51.124925    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:32:51.124933    3508 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 10:32:51.178019    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237571.303332168
	
	I0717 10:32:51.178031    3508 fix.go:216] guest clock: 1721237571.303332168
	I0717 10:32:51.178046    3508 fix.go:229] Guest: 2024-07-17 10:32:51.303332168 -0700 PDT Remote: 2024-07-17 10:32:51.124202 -0700 PDT m=+13.941974821 (delta=179.130168ms)
	I0717 10:32:51.178065    3508 fix.go:200] guest clock delta is within tolerance: 179.130168ms
	I0717 10:32:51.178069    3508 start.go:83] releasing machines lock for "ha-572000", held for 13.530645229s
	I0717 10:32:51.178090    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:51.178220    3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:32:51.178321    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:51.178658    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:51.178764    3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:32:51.178848    3508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:32:51.178881    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:51.178898    3508 ssh_runner.go:195] Run: cat /version.json
	I0717 10:32:51.178911    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:32:51.178978    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:51.179001    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:32:51.179061    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:51.179087    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:32:51.179158    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:51.179178    3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:32:51.179272    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:32:51.179286    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:32:51.214891    3508 ssh_runner.go:195] Run: systemctl --version
	I0717 10:32:51.259994    3508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 10:32:51.264962    3508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:32:51.265002    3508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:32:51.277704    3508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:32:51.277717    3508 start.go:495] detecting cgroup driver to use...
	I0717 10:32:51.277809    3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:32:51.295436    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:32:51.304332    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:32:51.313061    3508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:32:51.313115    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:32:51.321793    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:32:51.330506    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:32:51.339262    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:32:51.347997    3508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:32:51.356934    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:32:51.365798    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:32:51.374520    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:32:51.383330    3508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:32:51.391096    3508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:32:51.398988    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:32:51.492043    3508 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:32:51.510670    3508 start.go:495] detecting cgroup driver to use...
	I0717 10:32:51.510748    3508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:32:51.522109    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:32:51.533578    3508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:32:51.547583    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:32:51.558324    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:32:51.568495    3508 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:32:51.586295    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:32:51.596174    3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:32:51.611388    3508 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:32:51.614154    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:32:51.621515    3508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:32:51.636315    3508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:32:51.730805    3508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:32:51.833325    3508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:32:51.833396    3508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:32:51.849329    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:32:51.950120    3508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:32:54.304256    3508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.354082061s)
	I0717 10:32:54.304312    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:32:54.314507    3508 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0717 10:32:54.327160    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:32:54.337277    3508 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:32:54.428967    3508 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:32:54.528124    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:32:54.629785    3508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:32:54.644492    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:32:54.655322    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:32:54.750191    3508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 10:32:54.814687    3508 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:32:54.814779    3508 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:32:54.819517    3508 start.go:563] Will wait 60s for crictl version
	I0717 10:32:54.819571    3508 ssh_runner.go:195] Run: which crictl
	I0717 10:32:54.823230    3508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:32:54.848640    3508 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 10:32:54.848713    3508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:32:54.866198    3508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:32:54.925410    3508 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:32:54.925479    3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:32:54.925865    3508 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:32:54.930367    3508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:32:54.939983    3508 kubeadm.go:883] updating cluster {Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false f
reshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 10:32:54.940088    3508 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:32:54.940151    3508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 10:32:54.953243    3508 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0717 10:32:54.953256    3508 docker.go:615] Images already preloaded, skipping extraction
	I0717 10:32:54.953343    3508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 10:32:54.966247    3508 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0717 10:32:54.966267    3508 cache_images.go:84] Images are preloaded, skipping loading
	I0717 10:32:54.966280    3508 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.30.2 docker true true} ...
	I0717 10:32:54.966352    3508 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-572000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 10:32:54.966420    3508 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 10:32:54.987201    3508 cni.go:84] Creating CNI manager for ""
	I0717 10:32:54.987214    3508 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 10:32:54.987234    3508 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 10:32:54.987251    3508 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-572000 NodeName:ha-572000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 10:32:54.987337    3508 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-572000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 10:32:54.987354    3508 kube-vip.go:115] generating kube-vip config ...
	I0717 10:32:54.987400    3508 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 10:32:54.999700    3508 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 10:32:54.999787    3508 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0717 10:32:54.999838    3508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:32:55.007455    3508 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:32:55.007500    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 10:32:55.014894    3508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0717 10:32:55.028112    3508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:32:55.043389    3508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0717 10:32:55.057830    3508 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0717 10:32:55.071316    3508 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0717 10:32:55.074184    3508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:32:55.083466    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:32:55.183439    3508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:32:55.197167    3508 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000 for IP: 192.169.0.5
	I0717 10:32:55.197180    3508 certs.go:194] generating shared ca certs ...
	I0717 10:32:55.197190    3508 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.197338    3508 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:32:55.197396    3508 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:32:55.197406    3508 certs.go:256] generating profile certs ...
	I0717 10:32:55.197495    3508 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key
	I0717 10:32:55.197518    3508 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7
	I0717 10:32:55.197535    3508 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0717 10:32:55.361955    3508 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7 ...
	I0717 10:32:55.361972    3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7: {Name:mk29664a7594975eea689d2f8ed48fdc71e62969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.362392    3508 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7 ...
	I0717 10:32:55.362403    3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7: {Name:mk57740b7d279f3d01c1e4241799a0ef5b1e79c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.362628    3508 certs.go:381] copying /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7 -> /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt
	I0717 10:32:55.362825    3508 certs.go:385] copying /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7 -> /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key
	I0717 10:32:55.363038    3508 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key
	I0717 10:32:55.363048    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:32:55.363071    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:32:55.363089    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:32:55.363110    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:32:55.363127    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 10:32:55.363144    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 10:32:55.363163    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 10:32:55.363191    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 10:32:55.363269    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:32:55.363307    3508 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:32:55.363315    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:32:55.363344    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:32:55.363373    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:32:55.363400    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:32:55.363474    3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:32:55.363509    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:32:55.363530    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:32:55.363548    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:32:55.363978    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:32:55.392580    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:32:55.424360    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:32:55.448923    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:32:55.478217    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 10:32:55.513430    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 10:32:55.570074    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 10:32:55.603052    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 10:32:55.623021    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:32:55.641658    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:32:55.661447    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:32:55.681020    3508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 10:32:55.694280    3508 ssh_runner.go:195] Run: openssl version
	I0717 10:32:55.698669    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:32:55.707011    3508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:32:55.710297    3508 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:32:55.710338    3508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:32:55.714541    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:32:55.722665    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:32:55.730951    3508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:32:55.734212    3508 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:32:55.734256    3508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:32:55.738428    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
	I0717 10:32:55.746621    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:32:55.754849    3508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:32:55.758298    3508 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:32:55.758341    3508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:32:55.762565    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 10:32:55.770829    3508 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:32:55.774715    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 10:32:55.780174    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 10:32:55.784640    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 10:32:55.789061    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 10:32:55.793372    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 10:32:55.797672    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 10:32:55.802149    3508 kubeadm.go:392] StartCluster: {Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 C
lusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:32:55.802263    3508 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 10:32:55.813831    3508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 10:32:55.821229    3508 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 10:32:55.821245    3508 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 10:32:55.821296    3508 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 10:32:55.828842    3508 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 10:32:55.829172    3508 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-572000" does not appear in /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:32:55.829253    3508 kubeconfig.go:62] /Users/jenkins/minikube-integration/19283-1099/kubeconfig needs updating (will repair): [kubeconfig missing "ha-572000" cluster setting kubeconfig missing "ha-572000" context setting]
	I0717 10:32:55.829432    3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.829834    3508 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:32:55.830028    3508 kapi.go:59] client config for ha-572000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x71e8b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 10:32:55.830325    3508 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 10:32:55.830504    3508 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 10:32:55.837614    3508 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0717 10:32:55.837631    3508 kubeadm.go:597] duration metric: took 16.382346ms to restartPrimaryControlPlane
	I0717 10:32:55.837636    3508 kubeadm.go:394] duration metric: took 35.493194ms to StartCluster
	I0717 10:32:55.837647    3508 settings.go:142] acquiring lock: {Name:mkc45f011a907c66e2dbca7dadfff37ab48f7d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.837726    3508 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:32:55.838160    3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:32:55.838398    3508 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:32:55.838411    3508 start.go:241] waiting for startup goroutines ...
	I0717 10:32:55.838425    3508 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 10:32:55.838529    3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:32:55.881476    3508 out.go:177] * Enabled addons: 
	I0717 10:32:55.902556    3508 addons.go:510] duration metric: took 64.135812ms for enable addons: enabled=[]
	I0717 10:32:55.902605    3508 start.go:246] waiting for cluster config update ...
	I0717 10:32:55.902617    3508 start.go:255] writing updated cluster config ...
	I0717 10:32:55.924553    3508 out.go:177] 
	I0717 10:32:55.945720    3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:32:55.945818    3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:32:55.967938    3508 out.go:177] * Starting "ha-572000-m02" control-plane node in "ha-572000" cluster
	I0717 10:32:56.010383    3508 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:32:56.010417    3508 cache.go:56] Caching tarball of preloaded images
	I0717 10:32:56.010593    3508 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:32:56.010613    3508 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:32:56.010735    3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:32:56.011714    3508 start.go:360] acquireMachinesLock for ha-572000-m02: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:32:56.011815    3508 start.go:364] duration metric: took 76.983µs to acquireMachinesLock for "ha-572000-m02"
	I0717 10:32:56.011840    3508 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:32:56.011849    3508 fix.go:54] fixHost starting: m02
	I0717 10:32:56.012268    3508 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:32:56.012290    3508 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:32:56.021749    3508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51769
	I0717 10:32:56.022134    3508 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:32:56.022452    3508 main.go:141] libmachine: Using API Version  1
	I0717 10:32:56.022466    3508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:32:56.022707    3508 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:32:56.022831    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:32:56.022920    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetState
	I0717 10:32:56.023010    3508 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:56.023088    3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 3461
	I0717 10:32:56.024015    3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid 3461 missing from process table
	I0717 10:32:56.024031    3508 fix.go:112] recreateIfNeeded on ha-572000-m02: state=Stopped err=<nil>
	I0717 10:32:56.024040    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	W0717 10:32:56.024134    3508 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:32:56.066377    3508 out.go:177] * Restarting existing hyperkit VM for "ha-572000-m02" ...
	I0717 10:32:56.087674    3508 main.go:141] libmachine: (ha-572000-m02) Calling .Start
	I0717 10:32:56.087950    3508 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:56.087999    3508 main.go:141] libmachine: (ha-572000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid
	I0717 10:32:56.089806    3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid 3461 missing from process table
	I0717 10:32:56.089821    3508 main.go:141] libmachine: (ha-572000-m02) DBG | pid 3461 is in state "Stopped"
	I0717 10:32:56.089839    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid...
	I0717 10:32:56.090122    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Using UUID b5da5881-83da-4916-aec8-9a96c30c8c05
	I0717 10:32:56.117133    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Generated MAC 2:60:33:0:68:8b
	I0717 10:32:56.117180    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:32:56.117265    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b5da5881-83da-4916-aec8-9a96c30c8c05", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bea20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:32:56.117293    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b5da5881-83da-4916-aec8-9a96c30c8c05", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bea20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:32:56.117357    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b5da5881-83da-4916-aec8-9a96c30c8c05", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/ha-572000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machine
s/ha-572000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:32:56.117402    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b5da5881-83da-4916-aec8-9a96c30c8c05 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/ha-572000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd,earlyprintk=serial loglevel=3 console=t
tyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:32:56.117418    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:32:56.118762    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Pid is 3526
	I0717 10:32:56.119239    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Attempt 0
	I0717 10:32:56.119252    3508 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:32:56.119326    3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 3526
	I0717 10:32:56.121158    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Searching for 2:60:33:0:68:8b in /var/db/dhcpd_leases ...
	I0717 10:32:56.121244    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:32:56.121275    3508 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x669951be}
	I0717 10:32:56.121292    3508 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:37:45:6a:f1:7f ID:1,1e:37:45:6a:f1:7f Lease:0x6698001b}
	I0717 10:32:56.121303    3508 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x6699517a}
	I0717 10:32:56.121311    3508 main.go:141] libmachine: (ha-572000-m02) DBG | Found match: 2:60:33:0:68:8b
	I0717 10:32:56.121322    3508 main.go:141] libmachine: (ha-572000-m02) DBG | IP: 192.169.0.6
	I0717 10:32:56.121381    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetConfigRaw
	I0717 10:32:56.122119    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:32:56.122366    3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:32:56.122967    3508 machine.go:94] provisionDockerMachine start ...
	I0717 10:32:56.122978    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:32:56.123097    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:32:56.123191    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:32:56.123279    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:32:56.123377    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:32:56.123509    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:32:56.123686    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:32:56.123860    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:32:56.123869    3508 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:32:56.127424    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:32:56.136905    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:32:56.138099    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:32:56.138119    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:32:56.138127    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:32:56.138133    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:32:56.517427    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:32:56.517452    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:32:56.632129    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:32:56.632146    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:32:56.632154    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:32:56.632161    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:32:56.632978    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:32:56.632987    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:33:01.882277    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:33:01.882372    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:33:01.882381    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:33:01.905950    3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:33:07.183510    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:33:07.183524    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:33:07.183678    3508 buildroot.go:166] provisioning hostname "ha-572000-m02"
	I0717 10:33:07.183687    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:33:07.183789    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.183881    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.183992    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.184084    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.184179    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.184316    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:07.184458    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:07.184466    3508 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000-m02 && echo "ha-572000-m02" | sudo tee /etc/hostname
	I0717 10:33:07.250039    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000-m02
	
	I0717 10:33:07.250065    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.250206    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.250287    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.250390    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.250483    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.250636    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:07.250802    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:07.250815    3508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:33:07.311401    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:33:07.311420    3508 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:33:07.311431    3508 buildroot.go:174] setting up certificates
	I0717 10:33:07.311441    3508 provision.go:84] configureAuth start
	I0717 10:33:07.311448    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:33:07.311593    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:33:07.311680    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.311768    3508 provision.go:143] copyHostCerts
	I0717 10:33:07.311797    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:33:07.311852    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:33:07.311858    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:33:07.312271    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:33:07.312505    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:33:07.312536    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:33:07.312541    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:33:07.312619    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:33:07.312779    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:33:07.312811    3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:33:07.312816    3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:33:07.312912    3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:33:07.313069    3508 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000-m02 san=[127.0.0.1 192.169.0.6 ha-572000-m02 localhost minikube]
	I0717 10:33:07.375154    3508 provision.go:177] copyRemoteCerts
	I0717 10:33:07.375212    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:33:07.375227    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.375382    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.375473    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.375558    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.375656    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:33:07.409433    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:33:07.409505    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:33:07.429479    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:33:07.429539    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 10:33:07.451163    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:33:07.451231    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:33:07.471509    3508 provision.go:87] duration metric: took 160.057268ms to configureAuth
	I0717 10:33:07.471523    3508 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:33:07.471702    3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:33:07.471715    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:07.471860    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.471964    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.472045    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.472140    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.472216    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.472319    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:07.472438    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:07.472446    3508 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:33:07.526742    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:33:07.526766    3508 buildroot.go:70] root file system type: tmpfs
	I0717 10:33:07.526848    3508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:33:07.526860    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.526992    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.527094    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.527175    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.527248    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.527375    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:07.527510    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:07.527555    3508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:33:07.594480    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:33:07.594502    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:07.594640    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:07.594720    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.594808    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:07.594894    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:07.595019    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:07.595164    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:07.595178    3508 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:33:09.291500    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:33:09.291515    3508 machine.go:97] duration metric: took 13.164785942s to provisionDockerMachine
	I0717 10:33:09.291524    3508 start.go:293] postStartSetup for "ha-572000-m02" (driver="hyperkit")
	I0717 10:33:09.291531    3508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:33:09.291546    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.291729    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:33:09.291743    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:09.291855    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:09.291956    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.292049    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:09.292155    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:33:09.335381    3508 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:33:09.338532    3508 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:33:09.338541    3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:33:09.338631    3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:33:09.338771    3508 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:33:09.338778    3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:33:09.338937    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:33:09.346285    3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:33:09.366379    3508 start.go:296] duration metric: took 74.672934ms for postStartSetup
	I0717 10:33:09.366399    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.366579    3508 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:33:09.366592    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:09.366681    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:09.366764    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.366841    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:09.366910    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:33:09.399615    3508 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:33:09.399679    3508 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:33:09.453746    3508 fix.go:56] duration metric: took 13.437754461s for fixHost
	I0717 10:33:09.453771    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:09.453917    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:09.454023    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.454133    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.454219    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:09.454344    3508 main.go:141] libmachine: Using SSH client type: native
	I0717 10:33:09.454500    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:33:09.454509    3508 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 10:33:09.507516    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237589.628548940
	
	I0717 10:33:09.507529    3508 fix.go:216] guest clock: 1721237589.628548940
	I0717 10:33:09.507535    3508 fix.go:229] Guest: 2024-07-17 10:33:09.62854894 -0700 PDT Remote: 2024-07-17 10:33:09.453761 -0700 PDT m=+32.267325038 (delta=174.78794ms)
	I0717 10:33:09.507545    3508 fix.go:200] guest clock delta is within tolerance: 174.78794ms
	I0717 10:33:09.507551    3508 start.go:83] releasing machines lock for "ha-572000-m02", held for 13.491465012s
	I0717 10:33:09.507572    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.507699    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:33:09.532514    3508 out.go:177] * Found network options:
	I0717 10:33:09.552891    3508 out.go:177]   - NO_PROXY=192.169.0.5
	W0717 10:33:09.574387    3508 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:33:09.574424    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.575230    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.575434    3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:33:09.575533    3508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:33:09.575579    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	W0717 10:33:09.575674    3508 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:33:09.575742    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:09.575769    3508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 10:33:09.575787    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:33:09.575982    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.576003    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:33:09.576234    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:33:09.576305    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:09.576479    3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:33:09.576483    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:33:09.576596    3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	W0717 10:33:09.607732    3508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:33:09.607792    3508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:33:09.656923    3508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:33:09.656940    3508 start.go:495] detecting cgroup driver to use...
	I0717 10:33:09.657029    3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:33:09.673202    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:33:09.682149    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:33:09.691293    3508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:33:09.691348    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:33:09.700430    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:33:09.709231    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:33:09.718168    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:33:09.727036    3508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:33:09.736298    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:33:09.745642    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:33:09.754690    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:33:09.763621    3508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:33:09.771717    3508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:33:09.779861    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:33:09.883183    3508 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:33:09.901989    3508 start.go:495] detecting cgroup driver to use...
	I0717 10:33:09.902056    3508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:33:09.919371    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:33:09.932597    3508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:33:09.953462    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:33:09.964583    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:33:09.975437    3508 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:33:09.995754    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:33:10.006015    3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:33:10.020825    3508 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:33:10.023692    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:33:10.030648    3508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:33:10.044228    3508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:33:10.141170    3508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:33:10.249186    3508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:33:10.249214    3508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:33:10.263041    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:33:10.359716    3508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:34:11.416224    3508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.021941021s)
	I0717 10:34:11.416300    3508 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0717 10:34:11.450835    3508 out.go:177] 
	W0717 10:34:11.471671    3508 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 17:33:08 ha-572000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 17:33:08 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:08.044876852Z" level=info msg="Starting up"
	Jul 17 17:33:08 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:08.045449556Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 17:33:08 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:08.049003475Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=496
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.064081003Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079222179Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079310364Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079376764Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079411371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079557600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079609621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079752864Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079797312Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079887739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079928799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.080046807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.080239575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.081923027Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.081977822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082123136Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082166838Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082275842Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082332754Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084273060Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084339651Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084378389Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084411359Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084442922Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084509418Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084664339Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084738339Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084774254Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084804627Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084874943Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084911894Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084942267Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084972768Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085003365Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085032856Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085062302Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085090775Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085129743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085161980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085192066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085224112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085253798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085286177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085315810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085345112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085374976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085410351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085440979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085471089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085500214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085532017Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085571085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085603089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085635203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085683933Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085717630Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085747936Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085777505Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085805608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085834007Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085861655Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086142807Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086206245Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086259095Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086322237Z" level=info msg="containerd successfully booted in 0.022994s"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.065923436Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.108166477Z" level=info msg="Loading containers: start."
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.277192209Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.336888641Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.380874805Z" level=info msg="Loading containers: done."
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.387565385Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.387757279Z" level=info msg="Daemon has completed initialization"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.411794010Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.411982753Z" level=info msg="API listen on [::]:2376"
	Jul 17 17:33:09 ha-572000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.490943827Z" level=info msg="Processing signal 'terminated'"
	Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.491923813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.491997518Z" level=info msg="Daemon shutdown complete"
	Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.492029261Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.492040420Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 17:33:10 ha-572000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 17:33:11 ha-572000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 17:33:11 ha-572000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 17:33:11 ha-572000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 17:33:11 ha-572000-m02 dockerd[1164]: time="2024-07-17T17:33:11.528450348Z" level=info msg="Starting up"
	Jul 17 17:34:11 ha-572000-m02 dockerd[1164]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 17:34:11 ha-572000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 17:34:11 ha-572000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 17:34:11 ha-572000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0717 10:34:11.471802    3508 out.go:239] * 
	W0717 10:34:11.473037    3508 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:34:11.536857    3508 out.go:177] 
	
	
	==> Docker <==
	Jul 17 17:33:02 ha-572000 dockerd[1178]: time="2024-07-17T17:33:02.455192722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:33:23 ha-572000 dockerd[1178]: time="2024-07-17T17:33:23.740501414Z" level=info msg="shim disconnected" id=a6e0742139d63216679ae18bc09ccda23c0ed0e4d7a419b36e966729969bea9b namespace=moby
	Jul 17 17:33:23 ha-572000 dockerd[1178]: time="2024-07-17T17:33:23.740886535Z" level=warning msg="cleaning up after shim disconnected" id=a6e0742139d63216679ae18bc09ccda23c0ed0e4d7a419b36e966729969bea9b namespace=moby
	Jul 17 17:33:23 ha-572000 dockerd[1178]: time="2024-07-17T17:33:23.741204478Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 17 17:33:23 ha-572000 dockerd[1171]: time="2024-07-17T17:33:23.741723202Z" level=info msg="ignoring event" container=a6e0742139d63216679ae18bc09ccda23c0ed0e4d7a419b36e966729969bea9b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 17:33:24 ha-572000 dockerd[1178]: time="2024-07-17T17:33:24.747049658Z" level=info msg="shim disconnected" id=3f41d1a5d2c920e02b958fc57cddcf6e76eb7e4b56f3544ba5066b682eea0c8d namespace=moby
	Jul 17 17:33:24 ha-572000 dockerd[1178]: time="2024-07-17T17:33:24.747592119Z" level=warning msg="cleaning up after shim disconnected" id=3f41d1a5d2c920e02b958fc57cddcf6e76eb7e4b56f3544ba5066b682eea0c8d namespace=moby
	Jul 17 17:33:24 ha-572000 dockerd[1178]: time="2024-07-17T17:33:24.747636154Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 17 17:33:24 ha-572000 dockerd[1171]: time="2024-07-17T17:33:24.747788453Z" level=info msg="ignoring event" container=3f41d1a5d2c920e02b958fc57cddcf6e76eb7e4b56f3544ba5066b682eea0c8d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 17:33:34 ha-572000 dockerd[1178]: time="2024-07-17T17:33:34.836028865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:33:34 ha-572000 dockerd[1178]: time="2024-07-17T17:33:34.836093957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:33:34 ha-572000 dockerd[1178]: time="2024-07-17T17:33:34.836105101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:33:34 ha-572000 dockerd[1178]: time="2024-07-17T17:33:34.836225522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:33:38 ha-572000 dockerd[1178]: time="2024-07-17T17:33:38.652806846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:33:38 ha-572000 dockerd[1178]: time="2024-07-17T17:33:38.652893670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:33:38 ha-572000 dockerd[1178]: time="2024-07-17T17:33:38.652906541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:33:38 ha-572000 dockerd[1178]: time="2024-07-17T17:33:38.657845113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:33:59 ha-572000 dockerd[1171]: time="2024-07-17T17:33:59.069677227Z" level=info msg="ignoring event" container=8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 17:33:59 ha-572000 dockerd[1178]: time="2024-07-17T17:33:59.071115848Z" level=info msg="shim disconnected" id=8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f namespace=moby
	Jul 17 17:33:59 ha-572000 dockerd[1178]: time="2024-07-17T17:33:59.071609934Z" level=warning msg="cleaning up after shim disconnected" id=8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f namespace=moby
	Jul 17 17:33:59 ha-572000 dockerd[1178]: time="2024-07-17T17:33:59.071768605Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 17 17:34:00 ha-572000 dockerd[1171]: time="2024-07-17T17:34:00.079691666Z" level=info msg="ignoring event" container=1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 17:34:00 ha-572000 dockerd[1178]: time="2024-07-17T17:34:00.081342846Z" level=info msg="shim disconnected" id=1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca namespace=moby
	Jul 17 17:34:00 ha-572000 dockerd[1178]: time="2024-07-17T17:34:00.081524291Z" level=warning msg="cleaning up after shim disconnected" id=1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca namespace=moby
	Jul 17 17:34:00 ha-572000 dockerd[1178]: time="2024-07-17T17:34:00.081549356Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8f09c09ed996a       56ce0fd9fb532                                                                                         40 seconds ago       Exited              kube-apiserver            2                   6d7eb0e874999       kube-apiserver-ha-572000
	1e8f9939826f4       e874818b3caac                                                                                         44 seconds ago       Exited              kube-controller-manager   2                   b7d58c526c444       kube-controller-manager-ha-572000
	138bf6784d59c       38af8ddebf499                                                                                         About a minute ago   Running             kube-vip                  0                   df04438a4c5cc       kube-vip-ha-572000
	a53f8fcdf5d97       7820c83aa1394                                                                                         About a minute ago   Running             kube-scheduler            1                   bfd880612991e       kube-scheduler-ha-572000
	a3398a8ca33aa       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      1                   986ceb5a6f870       etcd-ha-572000
	e1a5eb1bed550       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 minutes ago        Exited              busybox                   0                   29ab413131af2       busybox-fc5497c4f-5r4wl
	bb44d784bb7ab       cbb01a7bd410d                                                                                         6 minutes ago        Exited              coredns                   0                   b8c622f08395f       coredns-7db6d8ff4d-2phrp
	7b275812468c9       cbb01a7bd410d                                                                                         6 minutes ago        Exited              coredns                   0                   2588bd7c40c23       coredns-7db6d8ff4d-9dzd5
	12ba2e181ee9a       6e38f40d628db                                                                                         6 minutes ago        Exited              storage-provisioner       0                   04b7cdcbedf20       storage-provisioner
	6e40e1427ab20       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              6 minutes ago        Exited              kindnet-cni               0                   32dc836c0a2df       kindnet-t85bv
	2aeed19835352       53c535741fb44                                                                                         6 minutes ago        Exited              kube-proxy                0                   f688e08d591be       kube-proxy-hst7h
	9200160f355ce       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago        Exited              kube-vip                  0                   1742f4f388abf       kube-vip-ha-572000
	e29f4fe295c1c       7820c83aa1394                                                                                         6 minutes ago        Exited              kube-scheduler            0                   25d825604d9f6       kube-scheduler-ha-572000
	c6527d620dad2       3861cfcd7c04c                                                                                         6 minutes ago        Exited              etcd                      0                   8844aab508d79       etcd-ha-572000
	
	
	==> coredns [7b275812468c] <==
	[INFO] 10.244.0.4:49035 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001628883s
	[INFO] 10.244.0.4:59665 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081818s
	[INFO] 10.244.0.4:52274 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077378s
	[INFO] 10.244.1.2:47113 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103907s
	[INFO] 10.244.2.2:57796 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060561s
	[INFO] 10.244.2.2:40339 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000425199s
	[INFO] 10.244.2.2:38339 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010468s
	[INFO] 10.244.2.2:50750 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070547s
	[INFO] 10.244.0.4:57426 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000045039s
	[INFO] 10.244.0.4:44283 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031195s
	[INFO] 10.244.1.2:56783 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117359s
	[INFO] 10.244.1.2:34315 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061358s
	[INFO] 10.244.1.2:55792 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062223s
	[INFO] 10.244.2.2:35768 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086426s
	[INFO] 10.244.2.2:42473 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102022s
	[INFO] 10.244.0.4:53524 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000087177s
	[INFO] 10.244.0.4:35224 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079092s
	[INFO] 10.244.0.4:53020 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075585s
	[INFO] 10.244.1.2:58074 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138293s
	[INFO] 10.244.1.2:40567 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000088337s
	[INFO] 10.244.1.2:46301 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000065328s
	[INFO] 10.244.1.2:59898 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075157s
	[INFO] 10.244.2.2:48754 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000081888s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bb44d784bb7a] <==
	[INFO] 10.244.0.4:54052 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001019206s
	[INFO] 10.244.0.4:45412 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077905s
	[INFO] 10.244.0.4:37924 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159382s
	[INFO] 10.244.1.2:45862 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108196s
	[INFO] 10.244.1.2:47464 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000616154s
	[INFO] 10.244.1.2:53720 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059338s
	[INFO] 10.244.1.2:49774 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123053s
	[INFO] 10.244.1.2:54472 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000097287s
	[INFO] 10.244.1.2:52054 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064095s
	[INFO] 10.244.1.2:54428 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064142s
	[INFO] 10.244.2.2:53088 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109393s
	[INFO] 10.244.2.2:49993 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000114279s
	[INFO] 10.244.2.2:42708 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063661s
	[INFO] 10.244.2.2:57150 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000043434s
	[INFO] 10.244.0.4:45987 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112397s
	[INFO] 10.244.0.4:44116 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057797s
	[INFO] 10.244.1.2:37635 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105428s
	[INFO] 10.244.2.2:52081 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131248s
	[INFO] 10.244.2.2:54912 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087858s
	[INFO] 10.244.0.4:53981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079367s
	[INFO] 10.244.2.2:45850 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152036s
	[INFO] 10.244.2.2:36004 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000223s
	[INFO] 10.244.2.2:54459 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000128834s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0717 17:34:18.799742    2935 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0717 17:34:18.800606    2935 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0717 17:34:18.803507    2935 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0717 17:34:18.804205    2935 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0717 17:34:18.806215    2935 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006777] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.574354] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +1.320177] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +1.823982] systemd-fstab-generator[476]: Ignoring "noauto" option for root device
	[  +0.112634] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
	[  +1.921936] systemd-fstab-generator[1101]: Ignoring "noauto" option for root device
	[  +0.055582] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.184169] systemd-fstab-generator[1137]: Ignoring "noauto" option for root device
	[  +0.104772] systemd-fstab-generator[1149]: Ignoring "noauto" option for root device
	[  +0.113810] systemd-fstab-generator[1163]: Ignoring "noauto" option for root device
	[  +2.482664] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.099717] systemd-fstab-generator[1391]: Ignoring "noauto" option for root device
	[  +0.099983] systemd-fstab-generator[1403]: Ignoring "noauto" option for root device
	[  +0.118727] systemd-fstab-generator[1418]: Ignoring "noauto" option for root device
	[  +0.437794] systemd-fstab-generator[1575]: Ignoring "noauto" option for root device
	[Jul17 17:33] kauditd_printk_skb: 234 callbacks suppressed
	[ +21.575073] kauditd_printk_skb: 40 callbacks suppressed
	[ +31.253030] clocksource: timekeeping watchdog on CPU1: Marking clocksource 'tsc' as unstable because the skew is too large:
	[  +0.000025] clocksource:                       'hpet' wd_now: 2db2a3c3 wd_last: 2d0e4271 mask: ffffffff
	[  +0.000022] clocksource:                       'tsc' cs_now: 5d6b30d2ea8 cs_last: 5d5e653cfb0 mask: ffffffffffffffff
	[  +0.001528] TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'.
	[  +0.002348] clocksource: Checking clocksource tsc synchronization from CPU 0.
	
	
	==> etcd [a3398a8ca33a] <==
	{"level":"info","ts":"2024-07-17T17:34:13.279609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:13.279634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:13.279658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:13.279676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:14.979641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:14.979698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:14.979724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:14.979755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:14.979817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	{"level":"warn","ts":"2024-07-17T17:34:15.896192Z","caller":"etcdhttp/health.go:232","msg":"serving /health false; no leader"}
	{"level":"warn","ts":"2024-07-17T17:34:15.896261Z","caller":"etcdhttp/health.go:119","msg":"/health error","output":"{\"health\":\"false\",\"reason\":\"RAFT NO LEADER\"}","status-code":503}
	{"level":"info","ts":"2024-07-17T17:34:16.684333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:16.68445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:16.684491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:16.684542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:16.684591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	{"level":"warn","ts":"2024-07-17T17:34:17.933316Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f6a6d94fe6ea5bc8","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:34:17.933579Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1d3f36ee75516151","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-17T17:34:17.93435Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f6a6d94fe6ea5bc8","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:34:17.934896Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1d3f36ee75516151","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: no route to host"}
	{"level":"info","ts":"2024-07-17T17:34:18.383488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:18.383591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:18.383628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:18.3837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:34:18.38375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	
	
	==> etcd [c6527d620dad] <==
	{"level":"warn","ts":"2024-07-17T17:32:29.48769Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T17:32:24.462555Z","time spent":"5.025134128s","remote":"127.0.0.1:36734","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/07/17 17:32:29 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-17T17:32:29.48774Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T17:32:25.149839Z","time spent":"4.337900582s","remote":"127.0.0.1:45174","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/07/17 17:32:29 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-17T17:32:29.512674Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T17:32:29.512703Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T17:32:29.512731Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"warn","ts":"2024-07-17T17:32:29.512821Z","caller":"etcdserver/server.go:1165","msg":"failed to revoke lease","lease-id":"584490c1bc074071","error":"etcdserver: request cancelled"}
	{"level":"info","ts":"2024-07-17T17:32:29.512836Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f6a6d94fe6ea5bc8"}
	{"level":"info","ts":"2024-07-17T17:32:29.512844Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f6a6d94fe6ea5bc8"}
	{"level":"info","ts":"2024-07-17T17:32:29.512857Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f6a6d94fe6ea5bc8"}
	{"level":"info","ts":"2024-07-17T17:32:29.512905Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"f6a6d94fe6ea5bc8"}
	{"level":"info","ts":"2024-07-17T17:32:29.512927Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"f6a6d94fe6ea5bc8"}
	{"level":"info","ts":"2024-07-17T17:32:29.512948Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"f6a6d94fe6ea5bc8"}
	{"level":"info","ts":"2024-07-17T17:32:29.512956Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f6a6d94fe6ea5bc8"}
	{"level":"info","ts":"2024-07-17T17:32:29.51296Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:32:29.512966Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:32:29.512977Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:32:29.513753Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:32:29.513778Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:32:29.516864Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:32:29.516891Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:32:29.518343Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-07-17T17:32:29.51839Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-07-17T17:32:29.518397Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-572000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 17:34:19 up 1 min,  0 users,  load average: 0.08, 0.05, 0.01
	Linux ha-572000 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6e40e1427ab2] <==
	I0717 17:31:56.892269       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:32:06.898213       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:32:06.898253       1 main.go:303] handling current node
	I0717 17:32:06.898264       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:32:06.898269       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:32:06.898416       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:32:06.898443       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:32:06.898526       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:32:06.898555       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:32:16.896377       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:32:16.896415       1 main.go:303] handling current node
	I0717 17:32:16.896426       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:32:16.896432       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:32:16.896606       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:32:16.896636       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:32:16.896674       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:32:16.896699       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:32:26.896557       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:32:26.896622       1 main.go:303] handling current node
	I0717 17:32:26.896678       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:32:26.896718       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:32:26.896938       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:32:26.897017       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:32:26.897158       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:32:26.897880       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8f09c09ed996] <==
	I0717 17:33:38.766324       1 options.go:221] external host was not specified, using 192.169.0.5
	I0717 17:33:38.766955       1 server.go:148] Version: v1.30.2
	I0717 17:33:38.767101       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:33:39.044188       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0717 17:33:39.046954       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 17:33:39.049409       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0717 17:33:39.049435       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0717 17:33:39.049563       1 instance.go:299] Using reconciler: lease
	W0717 17:33:59.045294       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0717 17:33:59.045986       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0717 17:33:59.051243       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0717 17:33:59.051294       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [1e8f9939826f] <==
	I0717 17:33:35.199426       1 serving.go:380] Generated self-signed cert in-memory
	I0717 17:33:35.611724       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0717 17:33:35.611860       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:33:35.612992       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 17:33:35.613172       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0717 17:33:35.613294       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0717 17:33:35.613433       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0717 17:34:00.060238       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:34220->192.169.0.5:8443: read: connection reset by peer"
	
	
	==> kube-proxy [2aeed1983535] <==
	I0717 17:27:43.315695       1 server_linux.go:69] "Using iptables proxy"
	I0717 17:27:43.322673       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	I0717 17:27:43.354011       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 17:27:43.354032       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 17:27:43.354044       1 server_linux.go:165] "Using iptables Proxier"
	I0717 17:27:43.355997       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 17:27:43.356216       1 server.go:872] "Version info" version="v1.30.2"
	I0717 17:27:43.356225       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:27:43.356874       1 config.go:192] "Starting service config controller"
	I0717 17:27:43.356903       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 17:27:43.356921       1 config.go:319] "Starting node config controller"
	I0717 17:27:43.356943       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 17:27:43.357077       1 config.go:101] "Starting endpoint slice config controller"
	I0717 17:27:43.357144       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 17:27:43.457513       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 17:27:43.457607       1 shared_informer.go:320] Caches are synced for node config
	I0717 17:27:43.457639       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [a53f8fcdf5d9] <==
	E0717 17:34:08.253923       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:08.255527       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:08.255685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:09.119507       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:09.120276       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:09.844542       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:09.845089       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:11.002782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:11.003425       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:11.004959       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:11.005426       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:11.724425       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:11.724599       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:13.328905       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:13.329009       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:13.526537       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:13.526638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:14.532488       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:14.533163       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:14.949017       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:14.949354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:18.098762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:18.099286       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:34:18.884819       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:34:18.885528       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	
	
	==> kube-scheduler [e29f4fe295c1] <==
	W0717 17:27:25.906822       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 17:27:25.906857       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 17:27:25.906870       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 17:27:25.906912       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 17:27:26.715070       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 17:27:26.715127       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 17:27:26.797242       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 17:27:26.797298       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 17:27:26.957071       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 17:27:26.957111       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 17:27:27.013148       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 17:27:27.013190       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 17:27:29.895450       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 17:30:13.328557       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-jhz2d\": pod busybox-fc5497c4f-jhz2d is already assigned to node \"ha-572000-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-jhz2d" node="ha-572000-m03"
	E0717 17:30:13.329015       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2f9e6064-727c-486c-b925-3ce5866e42ff(default/busybox-fc5497c4f-jhz2d) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-jhz2d"
	E0717 17:30:13.329121       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-jhz2d\": pod busybox-fc5497c4f-jhz2d is already assigned to node \"ha-572000-m03\"" pod="default/busybox-fc5497c4f-jhz2d"
	I0717 17:30:13.329256       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-jhz2d" node="ha-572000-m03"
	E0717 17:30:13.362412       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-zwhws\": pod busybox-fc5497c4f-zwhws is already assigned to node \"ha-572000\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-zwhws" node="ha-572000"
	E0717 17:30:13.362474       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-zwhws\": pod busybox-fc5497c4f-zwhws is already assigned to node \"ha-572000\"" pod="default/busybox-fc5497c4f-zwhws"
	E0717 17:30:13.441720       1 schedule_one.go:1067] "Error occurred" err="Pod default/busybox-fc5497c4f-l7sqr is already present in the active queue" pod="default/busybox-fc5497c4f-l7sqr"
	E0717 17:30:39.870609       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5wcph\": pod kube-proxy-5wcph is already assigned to node \"ha-572000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5wcph" node="ha-572000-m04"
	E0717 17:30:39.870661       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 731f5b57-131e-4e97-b47a-036b8d4edbcd(kube-system/kube-proxy-5wcph) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5wcph"
	E0717 17:30:39.870672       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5wcph\": pod kube-proxy-5wcph is already assigned to node \"ha-572000-m04\"" pod="kube-system/kube-proxy-5wcph"
	I0717 17:30:39.870686       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5wcph" node="ha-572000-m04"
	E0717 17:32:29.355082       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 17 17:34:00 ha-572000 kubelet[1582]: E0717 17:34:00.145681    1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-572000_kube-system(2508736da9632d6a9c9aaf9c250b4f65)\"" pod="kube-system/kube-apiserver-ha-572000" podUID="2508736da9632d6a9c9aaf9c250b4f65"
	Jul 17 17:34:00 ha-572000 kubelet[1582]: I0717 17:34:00.150461    1582 scope.go:117] "RemoveContainer" containerID="a6e0742139d63216679ae18bc09ccda23c0ed0e4d7a419b36e966729969bea9b"
	Jul 17 17:34:00 ha-572000 kubelet[1582]: I0717 17:34:00.925798    1582 kubelet_node_status.go:73] "Attempting to register node" node="ha-572000"
	Jul 17 17:34:03 ha-572000 kubelet[1582]: E0717 17:34:03.198768    1582 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-572000"
	Jul 17 17:34:03 ha-572000 kubelet[1582]: E0717 17:34:03.199285    1582 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-572000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Jul 17 17:34:03 ha-572000 kubelet[1582]: I0717 17:34:03.360896    1582 scope.go:117] "RemoveContainer" containerID="1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca"
	Jul 17 17:34:03 ha-572000 kubelet[1582]: E0717 17:34:03.361398    1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-572000_kube-system(967ef612fac8855d0ef3892ca2ae35cc)\"" pod="kube-system/kube-controller-manager-ha-572000" podUID="967ef612fac8855d0ef3892ca2ae35cc"
	Jul 17 17:34:04 ha-572000 kubelet[1582]: I0717 17:34:04.792263    1582 scope.go:117] "RemoveContainer" containerID="1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca"
	Jul 17 17:34:04 ha-572000 kubelet[1582]: E0717 17:34:04.792672    1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-572000_kube-system(967ef612fac8855d0ef3892ca2ae35cc)\"" pod="kube-system/kube-controller-manager-ha-572000" podUID="967ef612fac8855d0ef3892ca2ae35cc"
	Jul 17 17:34:05 ha-572000 kubelet[1582]: I0717 17:34:05.369319    1582 scope.go:117] "RemoveContainer" containerID="8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f"
	Jul 17 17:34:05 ha-572000 kubelet[1582]: E0717 17:34:05.369956    1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-572000_kube-system(2508736da9632d6a9c9aaf9c250b4f65)\"" pod="kube-system/kube-apiserver-ha-572000" podUID="2508736da9632d6a9c9aaf9c250b4f65"
	Jul 17 17:34:05 ha-572000 kubelet[1582]: E0717 17:34:05.660481    1582 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-572000\" not found"
	Jul 17 17:34:08 ha-572000 kubelet[1582]: I0717 17:34:08.082261    1582 scope.go:117] "RemoveContainer" containerID="8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f"
	Jul 17 17:34:08 ha-572000 kubelet[1582]: E0717 17:34:08.082649    1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-572000_kube-system(2508736da9632d6a9c9aaf9c250b4f65)\"" pod="kube-system/kube-apiserver-ha-572000" podUID="2508736da9632d6a9c9aaf9c250b4f65"
	Jul 17 17:34:09 ha-572000 kubelet[1582]: E0717 17:34:09.342982    1582 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-572000.17e310749989a167  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-572000,UID:ha-572000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-572000,},FirstTimestamp:2024-07-17 17:32:55.563845991 +0000 UTC m=+0.198827444,LastTimestamp:2024-07-17 17:32:55.563845991 +0000 UTC m=+0.198827444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-572000,}"
	Jul 17 17:34:10 ha-572000 kubelet[1582]: I0717 17:34:10.207057    1582 kubelet_node_status.go:73] "Attempting to register node" node="ha-572000"
	Jul 17 17:34:12 ha-572000 kubelet[1582]: E0717 17:34:12.418909    1582 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-572000"
	Jul 17 17:34:12 ha-572000 kubelet[1582]: E0717 17:34:12.419039    1582 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-572000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Jul 17 17:34:15 ha-572000 kubelet[1582]: W0717 17:34:15.486570    1582 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Jul 17 17:34:15 ha-572000 kubelet[1582]: E0717 17:34:15.486678    1582 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Jul 17 17:34:15 ha-572000 kubelet[1582]: E0717 17:34:15.662153    1582 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-572000\" not found"
	Jul 17 17:34:16 ha-572000 kubelet[1582]: I0717 17:34:16.604071    1582 scope.go:117] "RemoveContainer" containerID="1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca"
	Jul 17 17:34:16 ha-572000 kubelet[1582]: E0717 17:34:16.604626    1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-572000_kube-system(967ef612fac8855d0ef3892ca2ae35cc)\"" pod="kube-system/kube-controller-manager-ha-572000" podUID="967ef612fac8855d0ef3892ca2ae35cc"
	Jul 17 17:34:19 ha-572000 kubelet[1582]: I0717 17:34:19.421292    1582 kubelet_node_status.go:73] "Attempting to register node" node="ha-572000"
	Jul 17 17:34:19 ha-572000 kubelet[1582]: I0717 17:34:19.606575    1582 scope.go:117] "RemoveContainer" containerID="8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-572000 -n ha-572000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-572000 -n ha-572000: exit status 2 (14.118246773s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-572000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (16.87s)

TestMultiControlPlane/serial/StopCluster (166.94s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 stop -v=7 --alsologtostderr
E0717 10:34:48.126087    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 10:36:11.176204    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-572000 stop -v=7 --alsologtostderr: (2m46.764947726s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr: exit status 7 (106.97256ms)

-- stdout --
	ha-572000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-572000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-572000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-572000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:37:20.986102    3627 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:37:20.986313    3627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:37:20.986319    3627 out.go:304] Setting ErrFile to fd 2...
	I0717 10:37:20.986322    3627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:37:20.986493    3627 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
	I0717 10:37:20.986680    3627 out.go:298] Setting JSON to false
	I0717 10:37:20.986702    3627 mustload.go:65] Loading cluster: ha-572000
	I0717 10:37:20.986738    3627 notify.go:220] Checking for updates...
	I0717 10:37:20.987015    3627 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:20.987030    3627 status.go:255] checking status of ha-572000 ...
	I0717 10:37:20.987391    3627 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:20.987445    3627 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:20.996479    3627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51920
	I0717 10:37:20.996803    3627 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:20.997204    3627 main.go:141] libmachine: Using API Version  1
	I0717 10:37:20.997215    3627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:20.997436    3627 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:20.997545    3627 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:37:20.997634    3627 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:20.997701    3627 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3521
	I0717 10:37:20.998616    3627 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid 3521 missing from process table
	I0717 10:37:20.998646    3627 status.go:330] ha-572000 host status = "Stopped" (err=<nil>)
	I0717 10:37:20.998656    3627 status.go:343] host is not running, skipping remaining checks
	I0717 10:37:20.998664    3627 status.go:257] ha-572000 status: &{Name:ha-572000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 10:37:20.998682    3627 status.go:255] checking status of ha-572000-m02 ...
	I0717 10:37:20.998924    3627 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:20.998946    3627 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:21.007340    3627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51922
	I0717 10:37:21.007674    3627 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:21.008013    3627 main.go:141] libmachine: Using API Version  1
	I0717 10:37:21.008025    3627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:21.008302    3627 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:21.008450    3627 main.go:141] libmachine: (ha-572000-m02) Calling .GetState
	I0717 10:37:21.008542    3627 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:21.008607    3627 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 3526
	I0717 10:37:21.015153    3627 status.go:330] ha-572000-m02 host status = "Stopped" (err=<nil>)
	I0717 10:37:21.015163    3627 status.go:343] host is not running, skipping remaining checks
	I0717 10:37:21.015176    3627 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid 3526 missing from process table
	I0717 10:37:21.015171    3627 status.go:257] ha-572000-m02 status: &{Name:ha-572000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 10:37:21.015189    3627 status.go:255] checking status of ha-572000-m03 ...
	I0717 10:37:21.015435    3627 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:21.015479    3627 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:21.023846    3627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51924
	I0717 10:37:21.024228    3627 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:21.024608    3627 main.go:141] libmachine: Using API Version  1
	I0717 10:37:21.024619    3627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:21.024851    3627 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:21.024959    3627 main.go:141] libmachine: (ha-572000-m03) Calling .GetState
	I0717 10:37:21.025052    3627 main.go:141] libmachine: (ha-572000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:21.025120    3627 main.go:141] libmachine: (ha-572000-m03) DBG | hyperkit pid from json: 2972
	I0717 10:37:21.026022    3627 main.go:141] libmachine: (ha-572000-m03) DBG | hyperkit pid 2972 missing from process table
	I0717 10:37:21.026054    3627 status.go:330] ha-572000-m03 host status = "Stopped" (err=<nil>)
	I0717 10:37:21.026061    3627 status.go:343] host is not running, skipping remaining checks
	I0717 10:37:21.026070    3627 status.go:257] ha-572000-m03 status: &{Name:ha-572000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 10:37:21.026079    3627 status.go:255] checking status of ha-572000-m04 ...
	I0717 10:37:21.026319    3627 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:21.026339    3627 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:21.034784    3627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51926
	I0717 10:37:21.035080    3627 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:21.035399    3627 main.go:141] libmachine: Using API Version  1
	I0717 10:37:21.035409    3627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:21.035618    3627 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:21.035715    3627 main.go:141] libmachine: (ha-572000-m04) Calling .GetState
	I0717 10:37:21.035790    3627 main.go:141] libmachine: (ha-572000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:21.035861    3627 main.go:141] libmachine: (ha-572000-m04) DBG | hyperkit pid from json: 3096
	I0717 10:37:21.036762    3627 main.go:141] libmachine: (ha-572000-m04) DBG | hyperkit pid 3096 missing from process table
	I0717 10:37:21.036789    3627 status.go:330] ha-572000-m04 host status = "Stopped" (err=<nil>)
	I0717 10:37:21.036798    3627 status.go:343] host is not running, skipping remaining checks
	I0717 10:37:21.036805    3627 status.go:257] ha-572000-m04 status: &{Name:ha-572000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr": ha-572000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-572000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-572000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-572000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr": ha-572000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-572000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-572000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-572000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr": ha-572000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-572000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-572000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-572000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-572000 -n ha-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-572000 -n ha-572000: exit status 7 (68.231248ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-572000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (166.94s)

TestMultiControlPlane/serial/RestartCluster (177.05s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-572000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
E0717 10:38:50.064229    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
E0717 10:39:48.133164    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-572000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : (2m52.347351296s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr
ha_test.go:571: status says not two control-plane nodes are present: args "out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr": ha-572000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-572000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-572000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-572000-m04
type: Worker
host: Running
kubelet: Running

ha_test.go:574: status says not three hosts are running: args "out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr": ha-572000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-572000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-572000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-572000-m04
type: Worker
host: Running
kubelet: Running

ha_test.go:577: status says not three kubelets are running: args "out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr": ha-572000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-572000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-572000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-572000-m04
type: Worker
host: Running
kubelet: Running

ha_test.go:580: status says not two apiservers are running: args "out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr": ha-572000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-572000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-572000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-572000-m04
type: Worker
host: Running
kubelet: Running

ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' True
	 True
	 True
	 True
	'

-- /stdout --
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-572000 -n ha-572000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-572000 logs -n 25: (3.427820354s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-572000 cp ha-572000-m03:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04:/home/docker/cp-test_ha-572000-m03_ha-572000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m04 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m03_ha-572000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-572000 cp testdata/cp-test.txt                                                                                            | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3497274976/001/cp-test_ha-572000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000:/home/docker/cp-test_ha-572000-m04_ha-572000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000 sudo cat                                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m04_ha-572000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m02:/home/docker/cp-test_ha-572000-m04_ha-572000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m02 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m04_ha-572000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m03:/home/docker/cp-test_ha-572000-m04_ha-572000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m03 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m04_ha-572000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-572000 node stop m02 -v=7                                                                                                 | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-572000 node start m02 -v=7                                                                                                | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:32 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-572000 -v=7                                                                                                       | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-572000 -v=7                                                                                                            | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT | 17 Jul 24 10:32 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-572000 --wait=true -v=7                                                                                                | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-572000                                                                                                            | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:34 PDT |                     |
	| node    | ha-572000 node delete m03 -v=7                                                                                               | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:34 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-572000 stop -v=7                                                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:34 PDT | 17 Jul 24 10:37 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-572000 --wait=true                                                                                                     | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:37 PDT | 17 Jul 24 10:40 PDT |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 10:37:21
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 10:37:21.160279    3636 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:37:21.160444    3636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:37:21.160449    3636 out.go:304] Setting ErrFile to fd 2...
	I0717 10:37:21.160453    3636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:37:21.160640    3636 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
	I0717 10:37:21.162037    3636 out.go:298] Setting JSON to false
	I0717 10:37:21.184380    3636 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2212,"bootTime":1721235629,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0717 10:37:21.184474    3636 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:37:21.206845    3636 out.go:177] * [ha-572000] minikube v1.33.1 on Darwin 14.5
	I0717 10:37:21.250316    3636 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 10:37:21.250374    3636 notify.go:220] Checking for updates...
	I0717 10:37:21.294243    3636 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:37:21.315083    3636 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 10:37:21.336268    3636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:37:21.357529    3636 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
	I0717 10:37:21.379368    3636 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:37:21.401138    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:21.401903    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:21.401985    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:21.411459    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51932
	I0717 10:37:21.411825    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:21.412241    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:21.412256    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:21.412501    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:21.412634    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:21.412826    3636 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:37:21.413099    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:21.413120    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:21.421537    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51934
	I0717 10:37:21.421880    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:21.422209    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:21.422224    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:21.422446    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:21.422563    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:21.451265    3636 out.go:177] * Using the hyperkit driver based on existing profile
	I0717 10:37:21.493400    3636 start.go:297] selected driver: hyperkit
	I0717 10:37:21.493425    3636 start.go:901] validating driver "hyperkit" against &{Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclas
s:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:37:21.493682    3636 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:37:21.493865    3636 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:37:21.494086    3636 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19283-1099/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0717 10:37:21.503763    3636 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0717 10:37:21.507648    3636 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:21.507668    3636 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0717 10:37:21.510386    3636 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:37:21.510420    3636 cni.go:84] Creating CNI manager for ""
	I0717 10:37:21.510429    3636 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 10:37:21.510503    3636 start.go:340] cluster config:
	{Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-t
iller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:37:21.510603    3636 iso.go:125] acquiring lock: {Name:mkf51f842bcc8a77e9c7c50d642c4c76848e96af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:37:21.554326    3636 out.go:177] * Starting "ha-572000" primary control-plane node in "ha-572000" cluster
	I0717 10:37:21.575453    3636 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:37:21.575524    3636 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0717 10:37:21.575584    3636 cache.go:56] Caching tarball of preloaded images
	I0717 10:37:21.575806    3636 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:37:21.575825    3636 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:37:21.576014    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:37:21.577007    3636 start.go:360] acquireMachinesLock for ha-572000: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:37:21.577135    3636 start.go:364] duration metric: took 100.667µs to acquireMachinesLock for "ha-572000"
	I0717 10:37:21.577166    3636 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:37:21.577183    3636 fix.go:54] fixHost starting: 
	I0717 10:37:21.577591    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:21.577617    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:21.586612    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51936
	I0717 10:37:21.586997    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:21.587342    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:21.587357    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:21.587563    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:21.587707    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:21.587805    3636 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:37:21.587906    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:21.587984    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3521
	I0717 10:37:21.588936    3636 fix.go:112] recreateIfNeeded on ha-572000: state=Stopped err=<nil>
	I0717 10:37:21.588955    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:21.588954    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid 3521 missing from process table
	W0717 10:37:21.589054    3636 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:37:21.631187    3636 out.go:177] * Restarting existing hyperkit VM for "ha-572000" ...
	I0717 10:37:21.652411    3636 main.go:141] libmachine: (ha-572000) Calling .Start
	I0717 10:37:21.652671    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:21.652780    3636 main.go:141] libmachine: (ha-572000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid
	I0717 10:37:21.654451    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid 3521 missing from process table
	I0717 10:37:21.654462    3636 main.go:141] libmachine: (ha-572000) DBG | pid 3521 is in state "Stopped"
	I0717 10:37:21.654497    3636 main.go:141] libmachine: (ha-572000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid...
	I0717 10:37:21.654867    3636 main.go:141] libmachine: (ha-572000) DBG | Using UUID 5f2666de-0b32-4258-9840-7856c1bd4173
	I0717 10:37:21.763705    3636 main.go:141] libmachine: (ha-572000) DBG | Generated MAC d2:a6:10:ad:80:98
	I0717 10:37:21.763739    3636 main.go:141] libmachine: (ha-572000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:37:21.763844    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5f2666de-0b32-4258-9840-7856c1bd4173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8a80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:37:21.763875    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5f2666de-0b32-4258-9840-7856c1bd4173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8a80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:37:21.763912    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5f2666de-0b32-4258-9840-7856c1bd4173", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/ha-572000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:37:21.763957    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5f2666de-0b32-4258-9840-7856c1bd4173 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/ha-572000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:37:21.763980    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:37:21.765595    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: Pid is 3650
	I0717 10:37:21.766010    3636 main.go:141] libmachine: (ha-572000) DBG | Attempt 0
	I0717 10:37:21.766020    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:21.766092    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3650
	I0717 10:37:21.767880    3636 main.go:141] libmachine: (ha-572000) DBG | Searching for d2:a6:10:ad:80:98 in /var/db/dhcpd_leases ...
	I0717 10:37:21.767940    3636 main.go:141] libmachine: (ha-572000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:37:21.767961    3636 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x669951d0}
	I0717 10:37:21.767972    3636 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x669951be}
	I0717 10:37:21.767977    3636 main.go:141] libmachine: (ha-572000) DBG | Found match: d2:a6:10:ad:80:98
	I0717 10:37:21.767984    3636 main.go:141] libmachine: (ha-572000) DBG | IP: 192.169.0.5
	I0717 10:37:21.768041    3636 main.go:141] libmachine: (ha-572000) Calling .GetConfigRaw
	I0717 10:37:21.768653    3636 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:37:21.768835    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:37:21.769276    3636 machine.go:94] provisionDockerMachine start ...
	I0717 10:37:21.769288    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:21.769440    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:21.769559    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:21.769675    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:21.769782    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:21.769886    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:21.770036    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:21.770285    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:21.770298    3636 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:37:21.773346    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:37:21.825199    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:37:21.825892    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:37:21.825902    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:37:21.825909    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:37:21.825917    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:37:22.200252    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:37:22.200268    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:37:22.314927    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:37:22.314948    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:37:22.314982    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:37:22.314999    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:37:22.315852    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:37:22.315864    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:37:27.580528    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:27 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:37:27.580565    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:27 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:37:27.580573    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:27 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:37:27.604198    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:27 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:37:32.830003    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:37:32.830021    3636 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:37:32.830158    3636 buildroot.go:166] provisioning hostname "ha-572000"
	I0717 10:37:32.830170    3636 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:37:32.830268    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:32.830359    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:32.830451    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:32.830548    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:32.830646    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:32.830800    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:32.830958    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:32.830967    3636 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000 && echo "ha-572000" | sudo tee /etc/hostname
	I0717 10:37:32.892396    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000
	
	I0717 10:37:32.892414    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:32.892535    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:32.892617    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:32.892697    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:32.892768    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:32.892926    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:32.893069    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:32.893080    3636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:37:32.952066    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:37:32.952086    3636 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:37:32.952098    3636 buildroot.go:174] setting up certificates
	I0717 10:37:32.952109    3636 provision.go:84] configureAuth start
	I0717 10:37:32.952116    3636 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:37:32.952255    3636 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:37:32.952365    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:32.952464    3636 provision.go:143] copyHostCerts
	I0717 10:37:32.952503    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:37:32.952585    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:37:32.952594    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:37:32.952749    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:37:32.952965    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:37:32.953012    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:37:32.953018    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:37:32.953117    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:37:32.953281    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:37:32.953328    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:37:32.953333    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:37:32.953420    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:37:32.953574    3636 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000 san=[127.0.0.1 192.169.0.5 ha-572000 localhost minikube]
	I0717 10:37:33.013099    3636 provision.go:177] copyRemoteCerts
	I0717 10:37:33.013145    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:37:33.013161    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:33.013272    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:33.013371    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.013543    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:33.013682    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:33.045521    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:37:33.045593    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:37:33.064633    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:37:33.064699    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0717 10:37:33.084163    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:37:33.084229    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:37:33.103388    3636 provision.go:87] duration metric: took 151.262739ms to configureAuth
	I0717 10:37:33.103401    3636 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:37:33.103573    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:33.103587    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:33.103711    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:33.103809    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:33.103896    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.103977    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.104077    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:33.104181    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:33.104316    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:33.104324    3636 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:37:33.156434    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:37:33.156448    3636 buildroot.go:70] root file system type: tmpfs
	I0717 10:37:33.156525    3636 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:37:33.156537    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:33.156662    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:33.156743    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.156842    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.156931    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:33.157047    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:33.157186    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:33.157233    3636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:37:33.218680    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:37:33.218702    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:33.218866    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:33.218955    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.219056    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.219143    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:33.219283    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:33.219430    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:33.219443    3636 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:37:34.829521    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:37:34.829537    3636 machine.go:97] duration metric: took 13.059920588s to provisionDockerMachine
	I0717 10:37:34.829550    3636 start.go:293] postStartSetup for "ha-572000" (driver="hyperkit")
	I0717 10:37:34.829558    3636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:37:34.829569    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:34.829747    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:37:34.829763    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:34.829864    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:34.829977    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:34.830076    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:34.830154    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:34.863781    3636 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:37:34.867753    3636 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:37:34.867768    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:37:34.867875    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:37:34.868074    3636 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:37:34.868081    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:37:34.868294    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:37:34.881801    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:37:34.912172    3636 start.go:296] duration metric: took 82.609841ms for postStartSetup
	I0717 10:37:34.912193    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:34.912376    3636 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:37:34.912397    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:34.912490    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:34.912588    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:34.912689    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:34.912778    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:34.946140    3636 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:37:34.946199    3636 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:37:34.999470    3636 fix.go:56] duration metric: took 13.421948957s for fixHost
	I0717 10:37:34.999494    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:34.999648    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:34.999748    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:34.999850    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:34.999944    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:35.000069    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:35.000221    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:35.000229    3636 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 10:37:35.051085    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237854.922867132
	
	I0717 10:37:35.051099    3636 fix.go:216] guest clock: 1721237854.922867132
	I0717 10:37:35.051112    3636 fix.go:229] Guest: 2024-07-17 10:37:34.922867132 -0700 PDT Remote: 2024-07-17 10:37:34.999482 -0700 PDT m=+13.873438456 (delta=-76.614868ms)
	I0717 10:37:35.051130    3636 fix.go:200] guest clock delta is within tolerance: -76.614868ms
	I0717 10:37:35.051134    3636 start.go:83] releasing machines lock for "ha-572000", held for 13.473647062s
	I0717 10:37:35.051154    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:35.051301    3636 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:37:35.051418    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:35.051739    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:35.051853    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:35.051967    3636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:37:35.051989    3636 ssh_runner.go:195] Run: cat /version.json
	I0717 10:37:35.051998    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:35.052000    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:35.052101    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:35.052120    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:35.052207    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:35.052223    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:35.052289    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:35.052308    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:35.052381    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:35.052403    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:35.080899    3636 ssh_runner.go:195] Run: systemctl --version
	I0717 10:37:35.132487    3636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 10:37:35.137302    3636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:37:35.137349    3636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:37:35.150408    3636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:37:35.150420    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:37:35.150523    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:37:35.166824    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:37:35.175726    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:37:35.184531    3636 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:37:35.184576    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:37:35.193352    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:37:35.202047    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:37:35.210925    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:37:35.219775    3636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:37:35.228824    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:37:35.237746    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:37:35.246520    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:37:35.255409    3636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:37:35.263547    3636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:37:35.271637    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:35.370819    3636 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:37:35.385762    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:37:35.385839    3636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:37:35.397460    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:37:35.408605    3636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:37:35.423025    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:37:35.433954    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:37:35.444983    3636 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:37:35.462789    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:37:35.474320    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:37:35.491905    3636 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:37:35.494848    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:37:35.502963    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:37:35.516602    3636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:37:35.626759    3636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:37:35.732422    3636 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:37:35.732511    3636 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:37:35.746415    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:35.837452    3636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:37:38.134243    3636 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.296714656s)
	I0717 10:37:38.134309    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:37:38.145497    3636 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0717 10:37:38.159451    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:37:38.170560    3636 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:37:38.274400    3636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:37:38.385610    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:38.490247    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:37:38.502358    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:37:38.513179    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:38.610828    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 10:37:38.675050    3636 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:37:38.675129    3636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:37:38.679555    3636 start.go:563] Will wait 60s for crictl version
	I0717 10:37:38.679605    3636 ssh_runner.go:195] Run: which crictl
	I0717 10:37:38.682545    3636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:37:38.707789    3636 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 10:37:38.707873    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:37:38.724822    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:37:38.769236    3636 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:37:38.769287    3636 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:37:38.769657    3636 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:37:38.774296    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:37:38.784075    3636 kubeadm.go:883] updating cluster {Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false f
reshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 10:37:38.784175    3636 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:37:38.784231    3636 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 10:37:38.798317    3636 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0717 10:37:38.798329    3636 docker.go:615] Images already preloaded, skipping extraction
	I0717 10:37:38.798398    3636 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 10:37:38.810938    3636 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0717 10:37:38.810957    3636 cache_images.go:84] Images are preloaded, skipping loading
	I0717 10:37:38.810966    3636 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.30.2 docker true true} ...
	I0717 10:37:38.811048    3636 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-572000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 10:37:38.811115    3636 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 10:37:38.829256    3636 cni.go:84] Creating CNI manager for ""
	I0717 10:37:38.829269    3636 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 10:37:38.829280    3636 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 10:37:38.829295    3636 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-572000 NodeName:ha-572000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 10:37:38.829373    3636 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-572000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
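	For reference, the rendered kubeadm config above can be sanity-checked offline before it is copied to /var/tmp/minikube/kubeadm.yaml.new later in this log; a minimal sketch, assuming a local kubeadm v1.30.x binary and the YAML saved as kubeadm.yaml (hypothetical local path):
	# Validate the InitConfiguration/ClusterConfiguration/Kubelet/KubeProxy documents without
	# touching a cluster ("kubeadm config validate" exists in recent releases; availability on
	# older kubeadm versions is an assumption worth verifying).
	kubeadm config validate --config kubeadm.yaml
	# Compare against upstream defaults to spot the non-default fields quickly.
	kubeadm config print init-defaults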
	
	I0717 10:37:38.829387    3636 kube-vip.go:115] generating kube-vip config ...
	I0717 10:37:38.829437    3636 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 10:37:38.842048    3636 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 10:37:38.842112    3636 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
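	The manifest above is the static pod that is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below: kube-vip advertises the HA virtual IP 192.169.0.254 over ARP on eth0 and elects a leader through the plndr-cp-lock lease. A minimal post-restart check, assuming kubectl is pointed at the restored cluster:
	# Show which control-plane node currently holds the kube-vip leader lease.
	kubectl -n kube-system get lease plndr-cp-lock
	# Confirm the virtual IP answers from a host on the 192.169.0.0/24 network.
	ping -c 3 192.169.0.254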
	I0717 10:37:38.842157    3636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:37:38.849945    3636 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:37:38.849994    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 10:37:38.857243    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0717 10:37:38.870596    3636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:37:38.883936    3636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0717 10:37:38.897367    3636 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0717 10:37:38.910809    3636 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0717 10:37:38.913705    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:37:38.922873    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:39.030583    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:37:39.043433    3636 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000 for IP: 192.169.0.5
	I0717 10:37:39.043445    3636 certs.go:194] generating shared ca certs ...
	I0717 10:37:39.043456    3636 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:37:39.043642    3636 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:37:39.043720    3636 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:37:39.043730    3636 certs.go:256] generating profile certs ...
	I0717 10:37:39.043839    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key
	I0717 10:37:39.043918    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7
	I0717 10:37:39.043992    3636 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key
	I0717 10:37:39.043999    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:37:39.044021    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:37:39.044039    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:37:39.044057    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:37:39.044074    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 10:37:39.044104    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 10:37:39.044133    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 10:37:39.044152    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 10:37:39.044248    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:37:39.044296    3636 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:37:39.044310    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:37:39.044353    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:37:39.044397    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:37:39.044448    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:37:39.044541    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:37:39.044586    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:39.044607    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:37:39.044626    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:37:39.045107    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:37:39.076893    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:37:39.102499    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:37:39.129749    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:37:39.155627    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 10:37:39.180179    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 10:37:39.210181    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 10:37:39.264808    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 10:37:39.318806    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:37:39.365954    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:37:39.390620    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:37:39.410051    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 10:37:39.423408    3636 ssh_runner.go:195] Run: openssl version
	I0717 10:37:39.427605    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:37:39.436575    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:39.439804    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:39.439837    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:39.443971    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:37:39.452794    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:37:39.461667    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:37:39.464961    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:37:39.465002    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:37:39.469065    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
	I0717 10:37:39.477903    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:37:39.486816    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:37:39.490121    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:37:39.490162    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:37:39.494244    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 10:37:39.503378    3636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:37:39.506714    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 10:37:39.510953    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 10:37:39.515092    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 10:37:39.519272    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 10:37:39.523407    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 10:37:39.527554    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
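	The openssl calls above follow the standard OpenSSL subject-hash symlink convention for /etc/ssl/certs and use -checkend to make sure no control-plane certificate expires within the next 24 hours; the same checks can be reproduced by hand on the node (a sketch only, assuming the SSH session used throughout this log):
	# -hash prints the subject-name hash that names the /etc/ssl/certs/<hash>.0 symlink
	# created by the "ln -fs" commands above.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# -checkend 86400 exits non-zero if the certificate expires within 86400 seconds (24h).
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "renewal needed"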
	I0717 10:37:39.531780    3636 kubeadm.go:392] StartCluster: {Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 C
lusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:37:39.531904    3636 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 10:37:39.544965    3636 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 10:37:39.553126    3636 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 10:37:39.553138    3636 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 10:37:39.553178    3636 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 10:37:39.561206    3636 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 10:37:39.561518    3636 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-572000" does not appear in /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:37:39.561607    3636 kubeconfig.go:62] /Users/jenkins/minikube-integration/19283-1099/kubeconfig needs updating (will repair): [kubeconfig missing "ha-572000" cluster setting kubeconfig missing "ha-572000" context setting]
	I0717 10:37:39.561822    3636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:37:39.562469    3636 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:37:39.562674    3636 kapi.go:59] client config for ha-572000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa0a8b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 10:37:39.562998    3636 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 10:37:39.563178    3636 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 10:37:39.570855    3636 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0717 10:37:39.570867    3636 kubeadm.go:597] duration metric: took 17.724744ms to restartPrimaryControlPlane
	I0717 10:37:39.570878    3636 kubeadm.go:394] duration metric: took 39.101036ms to StartCluster
	I0717 10:37:39.570889    3636 settings.go:142] acquiring lock: {Name:mkc45f011a907c66e2dbca7dadfff37ab48f7d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:37:39.570961    3636 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:37:39.571333    3636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:37:39.571564    3636 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:37:39.571579    3636 start.go:241] waiting for startup goroutines ...
	I0717 10:37:39.571583    3636 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 10:37:39.571709    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:39.622273    3636 out.go:177] * Enabled addons: 
	I0717 10:37:39.644517    3636 addons.go:510] duration metric: took 72.937257ms for enable addons: enabled=[]
	I0717 10:37:39.644554    3636 start.go:246] waiting for cluster config update ...
	I0717 10:37:39.644589    3636 start.go:255] writing updated cluster config ...
	I0717 10:37:39.667630    3636 out.go:177] 
	I0717 10:37:39.689827    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:39.689958    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:37:39.712261    3636 out.go:177] * Starting "ha-572000-m02" control-plane node in "ha-572000" cluster
	I0717 10:37:39.754151    3636 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:37:39.754211    3636 cache.go:56] Caching tarball of preloaded images
	I0717 10:37:39.754408    3636 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:37:39.754427    3636 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:37:39.754564    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:37:39.755532    3636 start.go:360] acquireMachinesLock for ha-572000-m02: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:37:39.755656    3636 start.go:364] duration metric: took 98.999µs to acquireMachinesLock for "ha-572000-m02"
	I0717 10:37:39.755680    3636 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:37:39.755687    3636 fix.go:54] fixHost starting: m02
	I0717 10:37:39.756121    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:39.756167    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:39.765321    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51958
	I0717 10:37:39.765669    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:39.765987    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:39.765996    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:39.766231    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:39.766367    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:39.766465    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetState
	I0717 10:37:39.766561    3636 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:39.766639    3636 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 3526
	I0717 10:37:39.767558    3636 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid 3526 missing from process table
	I0717 10:37:39.767584    3636 fix.go:112] recreateIfNeeded on ha-572000-m02: state=Stopped err=<nil>
	I0717 10:37:39.767592    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	W0717 10:37:39.767681    3636 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:37:39.811253    3636 out.go:177] * Restarting existing hyperkit VM for "ha-572000-m02" ...
	I0717 10:37:39.832179    3636 main.go:141] libmachine: (ha-572000-m02) Calling .Start
	I0717 10:37:39.832337    3636 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:39.832362    3636 main.go:141] libmachine: (ha-572000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid
	I0717 10:37:39.833334    3636 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid 3526 missing from process table
	I0717 10:37:39.833343    3636 main.go:141] libmachine: (ha-572000-m02) DBG | pid 3526 is in state "Stopped"
	I0717 10:37:39.833355    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid...
	I0717 10:37:39.833536    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Using UUID b5da5881-83da-4916-aec8-9a96c30c8c05
	I0717 10:37:39.859749    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Generated MAC 2:60:33:0:68:8b
	I0717 10:37:39.859777    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:37:39.859978    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b5da5881-83da-4916-aec8-9a96c30c8c05", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c0ae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:37:39.860020    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b5da5881-83da-4916-aec8-9a96c30c8c05", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c0ae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:37:39.860096    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b5da5881-83da-4916-aec8-9a96c30c8c05", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/ha-572000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machine
s/ha-572000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:37:39.860169    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b5da5881-83da-4916-aec8-9a96c30c8c05 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/ha-572000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd,earlyprintk=serial loglevel=3 console=t
tyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:37:39.860189    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:37:39.861788    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: Pid is 3657
	I0717 10:37:39.862251    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Attempt 0
	I0717 10:37:39.862268    3636 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:39.862355    3636 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 3657
	I0717 10:37:39.864079    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Searching for 2:60:33:0:68:8b in /var/db/dhcpd_leases ...
	I0717 10:37:39.864121    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:37:39.864142    3636 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x669952da}
	I0717 10:37:39.864158    3636 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x669951d0}
	I0717 10:37:39.864182    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Found match: 2:60:33:0:68:8b
	I0717 10:37:39.864197    3636 main.go:141] libmachine: (ha-572000-m02) DBG | IP: 192.169.0.6
	I0717 10:37:39.864229    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetConfigRaw
	I0717 10:37:39.865013    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:37:39.865242    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:37:39.865841    3636 machine.go:94] provisionDockerMachine start ...
	I0717 10:37:39.865853    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:39.866023    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:39.866160    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:39.866271    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:39.866402    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:39.866505    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:39.866622    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:39.866842    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:39.866854    3636 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:37:39.869683    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:37:39.878483    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:37:39.879603    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:37:39.879617    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:37:39.879624    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:37:39.879629    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:37:40.255889    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:37:40.255907    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:37:40.370491    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:37:40.370510    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:37:40.370520    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:37:40.370527    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:37:40.371371    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:37:40.371379    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:37:45.614184    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:45 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:37:45.614198    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:45 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:37:45.614209    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:45 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:37:45.638128    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:45 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:37:50.925250    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:37:50.925264    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:37:50.925388    3636 buildroot.go:166] provisioning hostname "ha-572000-m02"
	I0717 10:37:50.925396    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:37:50.925487    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:50.925569    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:50.925664    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:50.925753    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:50.925857    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:50.925992    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:50.926145    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:50.926154    3636 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000-m02 && echo "ha-572000-m02" | sudo tee /etc/hostname
	I0717 10:37:50.991059    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000-m02
	
	I0717 10:37:50.991079    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:50.991219    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:50.991316    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:50.991401    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:50.991492    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:50.991638    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:50.991791    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:50.991803    3636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:37:51.051090    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:37:51.051108    3636 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:37:51.051119    3636 buildroot.go:174] setting up certificates
	I0717 10:37:51.051126    3636 provision.go:84] configureAuth start
	I0717 10:37:51.051132    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:37:51.051276    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:37:51.051370    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:51.051458    3636 provision.go:143] copyHostCerts
	I0717 10:37:51.051492    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:37:51.051538    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:37:51.051544    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:37:51.051674    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:37:51.051883    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:37:51.051914    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:37:51.051919    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:37:51.052017    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:37:51.052173    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:37:51.052202    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:37:51.052207    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:37:51.052377    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:37:51.052529    3636 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000-m02 san=[127.0.0.1 192.169.0.6 ha-572000-m02 localhost minikube]
	I0717 10:37:51.118183    3636 provision.go:177] copyRemoteCerts
	I0717 10:37:51.118227    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:37:51.118240    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:51.118378    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:51.118485    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.118583    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:51.118673    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:37:51.152061    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:37:51.152130    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:37:51.171745    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:37:51.171819    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 10:37:51.192673    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:37:51.192744    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:37:51.212788    3636 provision.go:87] duration metric: took 161.649391ms to configureAuth
	I0717 10:37:51.212802    3636 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:37:51.212965    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:51.212978    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:51.213112    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:51.213224    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:51.213316    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.213411    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.213499    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:51.213614    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:51.213748    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:51.213755    3636 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:37:51.269367    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:37:51.269384    3636 buildroot.go:70] root file system type: tmpfs
	I0717 10:37:51.269468    3636 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:37:51.269484    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:51.269663    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:51.269800    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.269888    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.269973    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:51.270120    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:51.270267    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:51.270313    3636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:37:51.334311    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:37:51.334330    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:51.334460    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:51.334550    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.334644    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.334739    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:51.334864    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:51.335013    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:51.335026    3636 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:37:52.973251    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
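	Because /lib/systemd/system/docker.service does not yet exist on the freshly restarted VM, the diff above fails and the new unit, including the Environment=NO_PROXY=192.169.0.5 line, is moved into place and docker is restarted. A quick follow-up check over the same SSH session (not captured in this log, a sketch only):
	# Print the unit file systemd actually loaded, drop-ins included.
	sudo systemctl cat docker
	# Confirm the daemon came back up after the forced restart.
	sudo systemctl is-active docker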
	
	I0717 10:37:52.973265    3636 machine.go:97] duration metric: took 13.107082478s to provisionDockerMachine
	I0717 10:37:52.973273    3636 start.go:293] postStartSetup for "ha-572000-m02" (driver="hyperkit")
	I0717 10:37:52.973280    3636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:37:52.973291    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:52.973486    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:37:52.973497    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:52.973604    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:52.973699    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:52.973791    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:52.973882    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:37:53.016888    3636 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:37:53.020683    3636 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:37:53.020693    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:37:53.020793    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:37:53.020968    3636 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:37:53.020974    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:37:53.021167    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:37:53.029813    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:37:53.057224    3636 start.go:296] duration metric: took 83.939886ms for postStartSetup
	I0717 10:37:53.057245    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:53.057420    3636 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:37:53.057442    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:53.057549    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:53.057634    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:53.057729    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:53.057811    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:37:53.091296    3636 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:37:53.091355    3636 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:37:53.145297    3636 fix.go:56] duration metric: took 13.389268028s for fixHost
	I0717 10:37:53.145323    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:53.145457    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:53.145570    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:53.145662    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:53.145747    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:53.145888    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:53.146033    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:53.146041    3636 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 10:37:53.200266    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237873.035451058
	
	I0717 10:37:53.200279    3636 fix.go:216] guest clock: 1721237873.035451058
	I0717 10:37:53.200284    3636 fix.go:229] Guest: 2024-07-17 10:37:53.035451058 -0700 PDT Remote: 2024-07-17 10:37:53.145313 -0700 PDT m=+32.018809214 (delta=-109.861942ms)
	I0717 10:37:53.200294    3636 fix.go:200] guest clock delta is within tolerance: -109.861942ms
	I0717 10:37:53.200298    3636 start.go:83] releasing machines lock for "ha-572000-m02", held for 13.44429115s
	I0717 10:37:53.200315    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:53.200436    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:37:53.222208    3636 out.go:177] * Found network options:
	I0717 10:37:53.243791    3636 out.go:177]   - NO_PROXY=192.169.0.5
	W0717 10:37:53.264601    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:37:53.264624    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:53.265081    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:53.265198    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:53.265269    3636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:37:53.265297    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	W0717 10:37:53.265332    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:37:53.265384    3636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 10:37:53.265387    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:53.265394    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:53.265518    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:53.265536    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:53.265639    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:53.265670    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:53.265728    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:37:53.265789    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:53.265871    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	W0717 10:37:53.294993    3636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:37:53.295059    3636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:37:53.339897    3636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:37:53.339919    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:37:53.340039    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:37:53.356231    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:37:53.365203    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:37:53.374127    3636 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:37:53.374184    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:37:53.382910    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:37:53.391778    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:37:53.400635    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:37:53.409795    3636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:37:53.418780    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:37:53.427594    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:37:53.436364    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:37:53.445437    3636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:37:53.453621    3636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:37:53.461634    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:53.558529    3636 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:37:53.577286    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:37:53.577360    3636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:37:53.591736    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:37:53.603521    3636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:37:53.618503    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:37:53.629064    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:37:53.639359    3636 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:37:53.658160    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:37:53.668814    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:37:53.683643    3636 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:37:53.686618    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:37:53.693926    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:37:53.707525    3636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:37:53.805691    3636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:37:53.920383    3636 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:37:53.920404    3636 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:37:53.934506    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:54.030259    3636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:37:56.344867    3636 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.314525686s)
	I0717 10:37:56.344926    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:37:56.355390    3636 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0717 10:37:56.369820    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:37:56.380473    3636 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:37:56.479810    3636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:37:56.576860    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:56.671071    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:37:56.685037    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:37:56.696333    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:56.796692    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 10:37:56.861896    3636 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:37:56.861969    3636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:37:56.866672    3636 start.go:563] Will wait 60s for crictl version
	I0717 10:37:56.866724    3636 ssh_runner.go:195] Run: which crictl
	I0717 10:37:56.869877    3636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:37:56.896141    3636 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 10:37:56.896217    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:37:56.915592    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:37:56.953839    3636 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:37:56.975427    3636 out.go:177]   - env NO_PROXY=192.169.0.5
	I0717 10:37:56.996201    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:37:56.996608    3636 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:37:57.001171    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:37:57.011676    3636 mustload.go:65] Loading cluster: ha-572000
	I0717 10:37:57.011852    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:57.012113    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:57.012134    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:57.020969    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51980
	I0717 10:37:57.021367    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:57.021710    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:57.021724    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:57.021923    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:57.022051    3636 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:37:57.022138    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:57.022223    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3650
	I0717 10:37:57.023174    3636 host.go:66] Checking if "ha-572000" exists ...
	I0717 10:37:57.023426    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:57.023448    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:57.032019    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51982
	I0717 10:37:57.032378    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:57.032733    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:57.032749    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:57.032974    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:57.033082    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:57.033182    3636 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000 for IP: 192.169.0.6
	I0717 10:37:57.033189    3636 certs.go:194] generating shared ca certs ...
	I0717 10:37:57.033198    3636 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:37:57.033338    3636 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:37:57.033394    3636 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:37:57.033402    3636 certs.go:256] generating profile certs ...
	I0717 10:37:57.033489    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key
	I0717 10:37:57.033573    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.060f3240
	I0717 10:37:57.033624    3636 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key
	I0717 10:37:57.033631    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:37:57.033652    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:37:57.033672    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:37:57.033691    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:37:57.033708    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 10:37:57.033726    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 10:37:57.033744    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 10:37:57.033762    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 10:37:57.033840    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:37:57.033893    3636 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:37:57.033902    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:37:57.033938    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:37:57.033978    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:37:57.034008    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:37:57.034074    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:37:57.034108    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:37:57.034128    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:37:57.034146    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:57.034178    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:57.034270    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:57.034368    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:57.034458    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:57.034541    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:57.060171    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0717 10:37:57.063698    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 10:37:57.072274    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0717 10:37:57.075754    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0717 10:37:57.084043    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 10:37:57.087057    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 10:37:57.095232    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0717 10:37:57.098576    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0717 10:37:57.107451    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0717 10:37:57.110444    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 10:37:57.118613    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0717 10:37:57.121532    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 10:37:57.130217    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:37:57.149961    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:37:57.168914    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:37:57.188002    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:37:57.207206    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 10:37:57.226812    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 10:37:57.246070    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 10:37:57.265450    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 10:37:57.284420    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:37:57.303511    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:37:57.322687    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:37:57.341613    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 10:37:57.355190    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0717 10:37:57.368847    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 10:37:57.382513    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0717 10:37:57.395989    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 10:37:57.409357    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 10:37:57.423052    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0717 10:37:57.436932    3636 ssh_runner.go:195] Run: openssl version
	I0717 10:37:57.441057    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:37:57.450112    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:37:57.453386    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:37:57.453428    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:37:57.457514    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
	I0717 10:37:57.466394    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:37:57.475362    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:37:57.478777    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:37:57.478819    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:37:57.482919    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 10:37:57.491931    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:37:57.500785    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:57.504034    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:57.504067    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:57.508244    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:37:57.517376    3636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:37:57.520713    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 10:37:57.524959    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 10:37:57.529259    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 10:37:57.533468    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 10:37:57.537834    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 10:37:57.542026    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 10:37:57.546248    3636 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.30.2 docker true true} ...
	I0717 10:37:57.546318    3636 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-572000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 10:37:57.546337    3636 kube-vip.go:115] generating kube-vip config ...
	I0717 10:37:57.546371    3636 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 10:37:57.559423    3636 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 10:37:57.559466    3636 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0717 10:37:57.559520    3636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:37:57.567774    3636 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:37:57.567817    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 10:37:57.575763    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0717 10:37:57.589137    3636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:37:57.602430    3636 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0717 10:37:57.616134    3636 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0717 10:37:57.619036    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:37:57.629004    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:57.726717    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:37:57.741206    3636 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:37:57.741389    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:57.762661    3636 out.go:177] * Verifying Kubernetes components...
	I0717 10:37:57.804314    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:57.930654    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:37:57.959022    3636 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:37:57.959251    3636 kapi.go:59] client config for ha-572000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa0a8b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 10:37:57.959292    3636 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0717 10:37:57.959472    3636 node_ready.go:35] waiting up to 6m0s for node "ha-572000-m02" to be "Ready" ...
	I0717 10:37:57.959551    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:37:57.959557    3636 round_trippers.go:469] Request Headers:
	I0717 10:37:57.959564    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:37:57.959567    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.587526    3636 round_trippers.go:574] Response Status: 200 OK in 8627 milliseconds
	I0717 10:38:06.588080    3636 node_ready.go:49] node "ha-572000-m02" has status "Ready":"True"
	I0717 10:38:06.588093    3636 node_ready.go:38] duration metric: took 8.628386286s for node "ha-572000-m02" to be "Ready" ...
	I0717 10:38:06.588101    3636 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:38:06.588149    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:06.588155    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.588161    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.588168    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.624239    3636 round_trippers.go:574] Response Status: 200 OK in 36 milliseconds
	I0717 10:38:06.633134    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.633193    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:06.633198    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.633204    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.633210    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.642331    3636 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 10:38:06.642741    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:06.642749    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.642756    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.642759    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.645958    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:06.646753    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:06.646763    3636 pod_ready.go:81] duration metric: took 13.611341ms for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.646771    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.646808    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9dzd5
	I0717 10:38:06.646813    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.646818    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.646822    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.650165    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:06.650520    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:06.650527    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.650533    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.650538    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.652506    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:06.652830    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:06.652839    3636 pod_ready.go:81] duration metric: took 6.063342ms for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.652846    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.652883    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000
	I0717 10:38:06.652888    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.652894    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.652897    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.688343    3636 round_trippers.go:574] Response Status: 200 OK in 35 milliseconds
	I0717 10:38:06.688830    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:06.688842    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.688852    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.688855    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.691433    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:06.691756    3636 pod_ready.go:92] pod "etcd-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:06.691766    3636 pod_ready.go:81] duration metric: took 38.913354ms for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.691776    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.691822    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m02
	I0717 10:38:06.691828    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.691835    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.691841    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.722915    3636 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0717 10:38:06.723291    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:06.723298    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.723304    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.723309    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.762595    3636 round_trippers.go:574] Response Status: 200 OK in 39 milliseconds
	I0717 10:38:06.763038    3636 pod_ready.go:92] pod "etcd-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:06.763050    3636 pod_ready.go:81] duration metric: took 71.265447ms for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.763057    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.763098    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m03
	I0717 10:38:06.763103    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.763109    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.763112    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.766379    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:06.788728    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:06.788744    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.788750    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.788754    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.790975    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:06.791292    3636 pod_ready.go:92] pod "etcd-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:06.791302    3636 pod_ready.go:81] duration metric: took 28.239348ms for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.791319    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.988792    3636 request.go:629] Waited for 197.413405ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:38:06.988885    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:38:06.988891    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.988897    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.988903    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.991048    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:07.189095    3636 request.go:629] Waited for 197.524443ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:07.189132    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:07.189138    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:07.189146    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:07.189196    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:07.191472    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:07.191816    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:07.191825    3636 pod_ready.go:81] duration metric: took 400.490534ms for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:07.191832    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:07.388673    3636 request.go:629] Waited for 196.768491ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:38:07.388712    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:38:07.388717    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:07.388723    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:07.388726    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:07.390742    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:07.589477    3636 request.go:629] Waited for 198.180735ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:07.589514    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:07.589519    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:07.589526    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:07.589532    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:07.593904    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:38:07.594274    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:07.594283    3636 pod_ready.go:81] duration metric: took 402.436695ms for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:07.594290    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:07.789046    3636 request.go:629] Waited for 194.715768ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:38:07.789116    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:38:07.789122    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:07.789128    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:07.789134    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:07.791498    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:07.988262    3636 request.go:629] Waited for 196.319765ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:07.988331    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:07.988337    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:07.988344    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:07.988349    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:07.990665    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:07.990933    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:07.990943    3636 pod_ready.go:81] duration metric: took 396.637435ms for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:07.990949    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:08.189888    3636 request.go:629] Waited for 198.896315ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:38:08.189961    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:38:08.189968    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:08.189977    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:08.189982    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:08.192640    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:08.388942    3636 request.go:629] Waited for 195.85351ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:08.388998    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:08.389006    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:08.389019    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:08.389035    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:08.392574    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:08.392939    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:08.392951    3636 pod_ready.go:81] duration metric: took 401.985681ms for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:08.392963    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:08.589323    3636 request.go:629] Waited for 196.303012ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:38:08.589449    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:38:08.589461    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:08.589473    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:08.589481    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:08.592867    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:08.788589    3636 request.go:629] Waited for 195.011915ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:08.788634    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:08.788643    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:08.788654    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:08.788663    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:08.791468    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:08.791995    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:08.792019    3636 pod_ready.go:81] duration metric: took 399.039947ms for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:08.792032    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:08.990174    3636 request.go:629] Waited for 198.086662ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:38:08.990290    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:38:08.990300    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:08.990310    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:08.990317    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:08.993459    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:09.189555    3636 request.go:629] Waited for 195.556708ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:09.189686    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:09.189699    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:09.189710    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:09.189717    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:09.193157    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:09.193504    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:09.193518    3636 pod_ready.go:81] duration metric: took 401.469313ms for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:09.193543    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:09.389705    3636 request.go:629] Waited for 196.104363ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:38:09.389843    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:38:09.389855    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:09.389866    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:09.389872    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:09.393695    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:09.588443    3636 request.go:629] Waited for 194.213728ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:38:09.588571    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:38:09.588582    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:09.588591    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:09.588614    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:09.591794    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:09.592120    3636 pod_ready.go:92] pod "kube-proxy-5wcph" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:09.592130    3636 pod_ready.go:81] duration metric: took 398.566071ms for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:09.592136    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:09.789810    3636 request.go:629] Waited for 197.599858ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:38:09.789932    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:38:09.789953    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:09.789967    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:09.789977    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:09.793548    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:09.990128    3636 request.go:629] Waited for 195.990226ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:09.990259    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:09.990271    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:09.990282    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:09.990289    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:09.994401    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:38:09.995074    3636 pod_ready.go:92] pod "kube-proxy-h7k9z" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:09.995084    3636 pod_ready.go:81] duration metric: took 402.932164ms for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:09.995091    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:10.188412    3636 request.go:629] Waited for 193.228723ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:38:10.188460    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:38:10.188468    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:10.188479    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:10.188487    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:10.192053    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:10.389379    3636 request.go:629] Waited for 196.635202ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:10.389554    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:10.389574    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:10.389589    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:10.389599    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:10.393541    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:10.393889    3636 pod_ready.go:92] pod "kube-proxy-hst7h" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:10.393900    3636 pod_ready.go:81] duration metric: took 398.793558ms for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:10.393912    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:10.589752    3636 request.go:629] Waited for 195.757616ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:38:10.589810    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:38:10.589821    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:10.589833    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:10.589842    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:10.593161    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:10.789574    3636 request.go:629] Waited for 195.972483ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:10.789649    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:10.789655    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:10.789661    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:10.789665    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:10.792056    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:10.792456    3636 pod_ready.go:92] pod "kube-proxy-v6jxh" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:10.792465    3636 pod_ready.go:81] duration metric: took 398.537807ms for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:10.792472    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:10.990155    3636 request.go:629] Waited for 197.636631ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:38:10.990304    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:38:10.990316    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:10.990327    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:10.990333    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:10.993508    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:11.188937    3636 request.go:629] Waited for 194.57393ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:11.188967    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:11.188973    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:11.188979    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:11.188983    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:11.190738    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:11.191134    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:11.191144    3636 pod_ready.go:81] duration metric: took 398.656979ms for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:11.191150    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:11.388866    3636 request.go:629] Waited for 197.675969ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:38:11.388927    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:38:11.388931    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:11.388937    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:11.388941    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:11.390887    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:11.589661    3636 request.go:629] Waited for 198.35169ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:11.589745    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:11.589751    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:11.589759    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:11.589764    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:11.591880    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:11.592231    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:11.592240    3636 pod_ready.go:81] duration metric: took 401.075331ms for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:11.592247    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:11.790368    3636 request.go:629] Waited for 198.069219ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:38:11.790468    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:38:11.790479    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:11.790491    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:11.790498    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:11.793691    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:11.988391    3636 request.go:629] Waited for 194.130009ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:11.988514    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:11.988524    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:11.988535    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:11.988543    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:11.991587    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:11.991946    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:11.991960    3636 pod_ready.go:81] duration metric: took 399.692083ms for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:11.991969    3636 pod_ready.go:38] duration metric: took 5.403719656s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:38:11.991988    3636 api_server.go:52] waiting for apiserver process to appear ...
	I0717 10:38:11.992040    3636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 10:38:12.003855    3636 api_server.go:72] duration metric: took 14.26226374s to wait for apiserver process to appear ...
	I0717 10:38:12.003867    3636 api_server.go:88] waiting for apiserver healthz status ...
	I0717 10:38:12.003882    3636 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0717 10:38:12.008423    3636 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0717 10:38:12.008465    3636 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0717 10:38:12.008471    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:12.008478    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:12.008481    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:12.009101    3636 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:38:12.009162    3636 api_server.go:141] control plane version: v1.30.2
	I0717 10:38:12.009171    3636 api_server.go:131] duration metric: took 5.299116ms to wait for apiserver health ...
	I0717 10:38:12.009178    3636 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 10:38:12.189013    3636 request.go:629] Waited for 179.768156ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:12.189094    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:12.189102    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:12.189111    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:12.189116    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:12.194083    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:38:12.199463    3636 system_pods.go:59] 26 kube-system pods found
	I0717 10:38:12.199478    3636 system_pods.go:61] "coredns-7db6d8ff4d-2phrp" [e76a91cf-8a14-454c-baee-7c0128645e0e] Running
	I0717 10:38:12.199495    3636 system_pods.go:61] "coredns-7db6d8ff4d-9dzd5" [f9a7cff1-56a3-4600-ae2f-a951dda10753] Running
	I0717 10:38:12.199501    3636 system_pods.go:61] "etcd-ha-572000" [2d7b717e-d404-4c63-afe9-799de6711964] Running
	I0717 10:38:12.199505    3636 system_pods.go:61] "etcd-ha-572000-m02" [8abc7662-c159-4953-9aba-11a75b4e7d65] Running
	I0717 10:38:12.199509    3636 system_pods.go:61] "etcd-ha-572000-m03" [78629908-d362-4a27-933f-2b929867c22f] Running
	I0717 10:38:12.199518    3636 system_pods.go:61] "kindnet-5xsrp" [0b84aa4d-7452-47f6-8052-f063e2e8ef7b] Running
	I0717 10:38:12.199521    3636 system_pods.go:61] "kindnet-72zfp" [0cc735c4-08b3-499a-b9e2-8c38377f371b] Running
	I0717 10:38:12.199524    3636 system_pods.go:61] "kindnet-g2m92" [be3d84cf-f9e8-426c-b02a-49eb7eed9d6a] Running
	I0717 10:38:12.199526    3636 system_pods.go:61] "kindnet-t85bv" [8649cc70-8ce8-4caf-938e-bd253fa5b7ae] Running
	I0717 10:38:12.199530    3636 system_pods.go:61] "kube-apiserver-ha-572000" [6409591d-7a50-414b-be63-44d5ddb0b0e0] Running
	I0717 10:38:12.199532    3636 system_pods.go:61] "kube-apiserver-ha-572000-m02" [d658459c-67f9-4798-87f2-c651d830af35] Running
	I0717 10:38:12.199535    3636 system_pods.go:61] "kube-apiserver-ha-572000-m03" [ac1c02f1-f023-4ad2-99bc-2657fb9d50d7] Running
	I0717 10:38:12.199538    3636 system_pods.go:61] "kube-controller-manager-ha-572000" [075c4d23-04bd-404a-822d-ae7d326a68ac] Running
	I0717 10:38:12.199541    3636 system_pods.go:61] "kube-controller-manager-ha-572000-m02" [3014bdb3-bb86-48d5-a045-2a8dab026bd8] Running
	I0717 10:38:12.199544    3636 system_pods.go:61] "kube-controller-manager-ha-572000-m03" [3b76e220-3a5b-4dac-8f69-6b380799d2ef] Running
	I0717 10:38:12.199546    3636 system_pods.go:61] "kube-proxy-5wcph" [731f5b57-131e-4e97-b47a-036b8d4edbcd] Running
	I0717 10:38:12.199553    3636 system_pods.go:61] "kube-proxy-h7k9z" [cf687084-b40d-43b6-9476-8c60c5f37d1d] Running
	I0717 10:38:12.199557    3636 system_pods.go:61] "kube-proxy-hst7h" [2b7c8d2c-3d71-4357-9429-1d8438c446f5] Running
	I0717 10:38:12.199559    3636 system_pods.go:61] "kube-proxy-v6jxh" [3f952fc8-747f-49da-b400-5212dba538d8] Running
	I0717 10:38:12.199565    3636 system_pods.go:61] "kube-scheduler-ha-572000" [da36a422-ba49-4cd6-8f47-9be7c43657be] Running
	I0717 10:38:12.199568    3636 system_pods.go:61] "kube-scheduler-ha-572000-m02" [13e289a2-8d3c-478f-9220-1d114dd5bf62] Running
	I0717 10:38:12.199571    3636 system_pods.go:61] "kube-scheduler-ha-572000-m03" [428716db-8096-4b57-9f90-e706d5683852] Running
	I0717 10:38:12.199573    3636 system_pods.go:61] "kube-vip-ha-572000" [289bb6df-c101-49d3-9e4a-6c0ecdd26551] Running
	I0717 10:38:12.199576    3636 system_pods.go:61] "kube-vip-ha-572000-m02" [fa57b8d2-bc41-42af-9d9a-1b68f882e6fe] Running
	I0717 10:38:12.199579    3636 system_pods.go:61] "kube-vip-ha-572000-m03" [a35c5007-dfe3-4b58-b16c-d568b8f9c1c2] Running
	I0717 10:38:12.199581    3636 system_pods.go:61] "storage-provisioner" [1801216d-6529-4049-b874-1577132fc03f] Running
	I0717 10:38:12.199585    3636 system_pods.go:74] duration metric: took 190.398086ms to wait for pod list to return data ...
	I0717 10:38:12.199592    3636 default_sa.go:34] waiting for default service account to be created ...
	I0717 10:38:12.388401    3636 request.go:629] Waited for 188.727547ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0717 10:38:12.388434    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0717 10:38:12.388439    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:12.388445    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:12.388449    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:12.390736    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:12.390877    3636 default_sa.go:45] found service account: "default"
	I0717 10:38:12.390886    3636 default_sa.go:55] duration metric: took 191.284842ms for default service account to be created ...
	I0717 10:38:12.390892    3636 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 10:38:12.588992    3636 request.go:629] Waited for 198.054942ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:12.589092    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:12.589101    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:12.589115    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:12.589123    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:12.595003    3636 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 10:38:12.599941    3636 system_pods.go:86] 26 kube-system pods found
	I0717 10:38:12.599953    3636 system_pods.go:89] "coredns-7db6d8ff4d-2phrp" [e76a91cf-8a14-454c-baee-7c0128645e0e] Running
	I0717 10:38:12.599962    3636 system_pods.go:89] "coredns-7db6d8ff4d-9dzd5" [f9a7cff1-56a3-4600-ae2f-a951dda10753] Running
	I0717 10:38:12.599966    3636 system_pods.go:89] "etcd-ha-572000" [2d7b717e-d404-4c63-afe9-799de6711964] Running
	I0717 10:38:12.599970    3636 system_pods.go:89] "etcd-ha-572000-m02" [8abc7662-c159-4953-9aba-11a75b4e7d65] Running
	I0717 10:38:12.599986    3636 system_pods.go:89] "etcd-ha-572000-m03" [78629908-d362-4a27-933f-2b929867c22f] Running
	I0717 10:38:12.599992    3636 system_pods.go:89] "kindnet-5xsrp" [0b84aa4d-7452-47f6-8052-f063e2e8ef7b] Running
	I0717 10:38:12.599996    3636 system_pods.go:89] "kindnet-72zfp" [0cc735c4-08b3-499a-b9e2-8c38377f371b] Running
	I0717 10:38:12.599999    3636 system_pods.go:89] "kindnet-g2m92" [be3d84cf-f9e8-426c-b02a-49eb7eed9d6a] Running
	I0717 10:38:12.600003    3636 system_pods.go:89] "kindnet-t85bv" [8649cc70-8ce8-4caf-938e-bd253fa5b7ae] Running
	I0717 10:38:12.600007    3636 system_pods.go:89] "kube-apiserver-ha-572000" [6409591d-7a50-414b-be63-44d5ddb0b0e0] Running
	I0717 10:38:12.600010    3636 system_pods.go:89] "kube-apiserver-ha-572000-m02" [d658459c-67f9-4798-87f2-c651d830af35] Running
	I0717 10:38:12.600014    3636 system_pods.go:89] "kube-apiserver-ha-572000-m03" [ac1c02f1-f023-4ad2-99bc-2657fb9d50d7] Running
	I0717 10:38:12.600018    3636 system_pods.go:89] "kube-controller-manager-ha-572000" [075c4d23-04bd-404a-822d-ae7d326a68ac] Running
	I0717 10:38:12.600021    3636 system_pods.go:89] "kube-controller-manager-ha-572000-m02" [3014bdb3-bb86-48d5-a045-2a8dab026bd8] Running
	I0717 10:38:12.600024    3636 system_pods.go:89] "kube-controller-manager-ha-572000-m03" [3b76e220-3a5b-4dac-8f69-6b380799d2ef] Running
	I0717 10:38:12.600028    3636 system_pods.go:89] "kube-proxy-5wcph" [731f5b57-131e-4e97-b47a-036b8d4edbcd] Running
	I0717 10:38:12.600031    3636 system_pods.go:89] "kube-proxy-h7k9z" [cf687084-b40d-43b6-9476-8c60c5f37d1d] Running
	I0717 10:38:12.600035    3636 system_pods.go:89] "kube-proxy-hst7h" [2b7c8d2c-3d71-4357-9429-1d8438c446f5] Running
	I0717 10:38:12.600038    3636 system_pods.go:89] "kube-proxy-v6jxh" [3f952fc8-747f-49da-b400-5212dba538d8] Running
	I0717 10:38:12.600041    3636 system_pods.go:89] "kube-scheduler-ha-572000" [da36a422-ba49-4cd6-8f47-9be7c43657be] Running
	I0717 10:38:12.600044    3636 system_pods.go:89] "kube-scheduler-ha-572000-m02" [13e289a2-8d3c-478f-9220-1d114dd5bf62] Running
	I0717 10:38:12.600048    3636 system_pods.go:89] "kube-scheduler-ha-572000-m03" [428716db-8096-4b57-9f90-e706d5683852] Running
	I0717 10:38:12.600051    3636 system_pods.go:89] "kube-vip-ha-572000" [289bb6df-c101-49d3-9e4a-6c0ecdd26551] Running
	I0717 10:38:12.600054    3636 system_pods.go:89] "kube-vip-ha-572000-m02" [fa57b8d2-bc41-42af-9d9a-1b68f882e6fe] Running
	I0717 10:38:12.600058    3636 system_pods.go:89] "kube-vip-ha-572000-m03" [a35c5007-dfe3-4b58-b16c-d568b8f9c1c2] Running
	I0717 10:38:12.600061    3636 system_pods.go:89] "storage-provisioner" [1801216d-6529-4049-b874-1577132fc03f] Running
	I0717 10:38:12.600065    3636 system_pods.go:126] duration metric: took 209.164597ms to wait for k8s-apps to be running ...
	I0717 10:38:12.600076    3636 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 10:38:12.600137    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 10:38:12.610524    3636 system_svc.go:56] duration metric: took 10.448568ms WaitForService to wait for kubelet
	I0717 10:38:12.610538    3636 kubeadm.go:582] duration metric: took 14.868933199s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:38:12.610564    3636 node_conditions.go:102] verifying NodePressure condition ...
	I0717 10:38:12.789306    3636 request.go:629] Waited for 178.678322ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0717 10:38:12.789427    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0717 10:38:12.789438    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:12.789448    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:12.789457    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:12.793007    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:12.794084    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:38:12.794097    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:38:12.794107    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:38:12.794110    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:38:12.794114    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:38:12.794122    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:38:12.794126    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:38:12.794129    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:38:12.794133    3636 node_conditions.go:105] duration metric: took 183.560156ms to run NodePressure ...
	I0717 10:38:12.794140    3636 start.go:241] waiting for startup goroutines ...
	I0717 10:38:12.794158    3636 start.go:255] writing updated cluster config ...
	I0717 10:38:12.815984    3636 out.go:177] 
	I0717 10:38:12.836616    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:38:12.836683    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:38:12.857448    3636 out.go:177] * Starting "ha-572000-m03" control-plane node in "ha-572000" cluster
	I0717 10:38:12.899463    3636 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:38:12.899506    3636 cache.go:56] Caching tarball of preloaded images
	I0717 10:38:12.899666    3636 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:38:12.899684    3636 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:38:12.899813    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:38:12.900669    3636 start.go:360] acquireMachinesLock for ha-572000-m03: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:38:12.900765    3636 start.go:364] duration metric: took 73.243µs to acquireMachinesLock for "ha-572000-m03"
	I0717 10:38:12.900790    3636 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:38:12.900816    3636 fix.go:54] fixHost starting: m03
	I0717 10:38:12.901158    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:38:12.901182    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:38:12.910100    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51987
	I0717 10:38:12.910428    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:38:12.910808    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:38:12.910824    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:38:12.911027    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:38:12.911151    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:12.911236    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetState
	I0717 10:38:12.911315    3636 main.go:141] libmachine: (ha-572000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:38:12.911405    3636 main.go:141] libmachine: (ha-572000-m03) DBG | hyperkit pid from json: 2972
	I0717 10:38:12.912336    3636 main.go:141] libmachine: (ha-572000-m03) DBG | hyperkit pid 2972 missing from process table
	I0717 10:38:12.912361    3636 fix.go:112] recreateIfNeeded on ha-572000-m03: state=Stopped err=<nil>
	I0717 10:38:12.912369    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	W0717 10:38:12.912452    3636 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:38:12.933536    3636 out.go:177] * Restarting existing hyperkit VM for "ha-572000-m03" ...
	I0717 10:38:12.975448    3636 main.go:141] libmachine: (ha-572000-m03) Calling .Start
	I0717 10:38:12.975666    3636 main.go:141] libmachine: (ha-572000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:38:12.975716    3636 main.go:141] libmachine: (ha-572000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/hyperkit.pid
	I0717 10:38:12.977484    3636 main.go:141] libmachine: (ha-572000-m03) DBG | hyperkit pid 2972 missing from process table
	I0717 10:38:12.977496    3636 main.go:141] libmachine: (ha-572000-m03) DBG | pid 2972 is in state "Stopped"
	I0717 10:38:12.977512    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/hyperkit.pid...
	I0717 10:38:12.977862    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Using UUID 5064fb5d-6e32-4be4-8d75-15b09204e5f5
	I0717 10:38:13.005572    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Generated MAC 6e:d3:62:da:43:cf
	I0717 10:38:13.005591    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:38:13.005736    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5064fb5d-6e32-4be4-8d75-15b09204e5f5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acc00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:38:13.005764    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5064fb5d-6e32-4be4-8d75-15b09204e5f5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acc00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:38:13.005828    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5064fb5d-6e32-4be4-8d75-15b09204e5f5", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/ha-572000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:38:13.005888    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5064fb5d-6e32-4be4-8d75-15b09204e5f5 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/ha-572000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:38:13.005909    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:38:13.007252    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: Pid is 3665
	I0717 10:38:13.007703    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Attempt 0
	I0717 10:38:13.007718    3636 main.go:141] libmachine: (ha-572000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:38:13.007809    3636 main.go:141] libmachine: (ha-572000-m03) DBG | hyperkit pid from json: 3665
	I0717 10:38:13.009827    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Searching for 6e:d3:62:da:43:cf in /var/db/dhcpd_leases ...
	I0717 10:38:13.009874    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:38:13.009921    3636 main.go:141] libmachine: (ha-572000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x669952ec}
	I0717 10:38:13.009945    3636 main.go:141] libmachine: (ha-572000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x669952da}
	I0717 10:38:13.009959    3636 main.go:141] libmachine: (ha-572000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:37:45:6a:f1:7f ID:1,1e:37:45:6a:f1:7f Lease:0x6698001b}
	I0717 10:38:13.009965    3636 main.go:141] libmachine: (ha-572000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:d3:62:da:43:cf ID:1,6e:d3:62:da:43:cf Lease:0x669950e4}
	I0717 10:38:13.009979    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetConfigRaw
	I0717 10:38:13.009982    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Found match: 6e:d3:62:da:43:cf
	I0717 10:38:13.009992    3636 main.go:141] libmachine: (ha-572000-m03) DBG | IP: 192.169.0.7
	I0717 10:38:13.010657    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetIP
	I0717 10:38:13.010834    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:38:13.011336    3636 machine.go:94] provisionDockerMachine start ...
	I0717 10:38:13.011346    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:13.011471    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:13.011562    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:13.011675    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:13.011768    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:13.011883    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:13.012034    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:13.012203    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:13.012211    3636 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:38:13.014976    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:38:13.023104    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:38:13.024110    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:38:13.024135    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:38:13.024157    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:38:13.024175    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:38:13.404157    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:38:13.404173    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:38:13.519656    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:38:13.519690    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:38:13.519727    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:38:13.519751    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:38:13.520524    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:38:13.520534    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:38:18.810258    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:18 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0717 10:38:18.810297    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:18 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0717 10:38:18.810307    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:18 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0717 10:38:18.834790    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:18 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0717 10:38:24.076646    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:38:24.076665    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetMachineName
	I0717 10:38:24.076790    3636 buildroot.go:166] provisioning hostname "ha-572000-m03"
	I0717 10:38:24.076802    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetMachineName
	I0717 10:38:24.076886    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.077024    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.077111    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.077196    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.077278    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.077404    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:24.077556    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:24.077565    3636 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000-m03 && echo "ha-572000-m03" | sudo tee /etc/hostname
	I0717 10:38:24.142857    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000-m03
	
	I0717 10:38:24.142872    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.143001    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.143104    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.143196    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.143280    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.143395    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:24.143539    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:24.143551    3636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:38:24.203331    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:38:24.203349    3636 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:38:24.203359    3636 buildroot.go:174] setting up certificates
	I0717 10:38:24.203364    3636 provision.go:84] configureAuth start
	I0717 10:38:24.203370    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetMachineName
	I0717 10:38:24.203518    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetIP
	I0717 10:38:24.203623    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.203721    3636 provision.go:143] copyHostCerts
	I0717 10:38:24.203751    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:38:24.203800    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:38:24.203806    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:38:24.203931    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:38:24.204144    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:38:24.204174    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:38:24.204179    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:38:24.204294    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:38:24.204463    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:38:24.204496    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:38:24.204500    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:38:24.204570    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:38:24.204726    3636 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000-m03 san=[127.0.0.1 192.169.0.7 ha-572000-m03 localhost minikube]
	I0717 10:38:24.389534    3636 provision.go:177] copyRemoteCerts
	I0717 10:38:24.389582    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:38:24.389597    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.389749    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.389840    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.389936    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.390018    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa Username:docker}
	I0717 10:38:24.424587    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:38:24.424660    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:38:24.444455    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:38:24.444522    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 10:38:24.465006    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:38:24.465071    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:38:24.485065    3636 provision.go:87] duration metric: took 281.685984ms to configureAuth
	I0717 10:38:24.485079    3636 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:38:24.485254    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:38:24.485268    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:24.485399    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.485509    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.485606    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.485695    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.485780    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.485889    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:24.486018    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:24.486026    3636 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:38:24.539772    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:38:24.539786    3636 buildroot.go:70] root file system type: tmpfs
	I0717 10:38:24.539874    3636 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:38:24.539885    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.540019    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.540102    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.540205    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.540313    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.540462    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:24.540607    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:24.540655    3636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:38:24.605074    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:38:24.605091    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.605230    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.605339    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.605424    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.605494    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.605620    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:24.605771    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:24.605784    3636 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:38:26.231394    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:38:26.231416    3636 machine.go:97] duration metric: took 13.21973714s to provisionDockerMachine
	I0717 10:38:26.231428    3636 start.go:293] postStartSetup for "ha-572000-m03" (driver="hyperkit")
	I0717 10:38:26.231437    3636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:38:26.231448    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.231633    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:38:26.231652    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:26.231764    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:26.231872    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.231959    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:26.232054    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa Username:docker}
	I0717 10:38:26.266647    3636 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:38:26.269791    3636 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:38:26.269801    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:38:26.269897    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:38:26.270060    3636 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:38:26.270067    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:38:26.270227    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:38:26.278127    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:38:26.297704    3636 start.go:296] duration metric: took 66.264765ms for postStartSetup
	I0717 10:38:26.297725    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.297894    3636 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:38:26.297906    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:26.297982    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:26.298095    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.298185    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:26.298259    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa Username:docker}
	I0717 10:38:26.332566    3636 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:38:26.332629    3636 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:38:26.364567    3636 fix.go:56] duration metric: took 13.463410955s for fixHost
	I0717 10:38:26.364593    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:26.364774    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:26.364878    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.364991    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.365075    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:26.365213    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:26.365360    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:26.365368    3636 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 10:38:26.420992    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237906.507932482
	
	I0717 10:38:26.421006    3636 fix.go:216] guest clock: 1721237906.507932482
	I0717 10:38:26.421017    3636 fix.go:229] Guest: 2024-07-17 10:38:26.507932482 -0700 PDT Remote: 2024-07-17 10:38:26.364583 -0700 PDT m=+65.237237021 (delta=143.349482ms)
	I0717 10:38:26.421032    3636 fix.go:200] guest clock delta is within tolerance: 143.349482ms
	I0717 10:38:26.421036    3636 start.go:83] releasing machines lock for "ha-572000-m03", held for 13.519917261s
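Note: the guest-clock probe above is simply a `date +%s.%N` call run over SSH and compared against the host clock; the 143 ms delta here is well inside minikube's skew tolerance, so the clock is left untouched. A rough manual re-creation, reusing the key path and IP from this log (second-level precision only, since macOS `date` lacks %N, and the StrictHostKeyChecking flag is an addition for non-interactive use):

    GUEST=$(ssh -o StrictHostKeyChecking=no \
      -i /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa \
      docker@192.169.0.7 'date +%s')
    HOST=$(date +%s)
    echo "guest-host clock delta: $((HOST - GUEST))s"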
	I0717 10:38:26.421054    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.421181    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetIP
	I0717 10:38:26.443010    3636 out.go:177] * Found network options:
	I0717 10:38:26.464409    3636 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0717 10:38:26.487460    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:38:26.487486    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:38:26.487503    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.488209    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.488434    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.488546    3636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:38:26.488583    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	W0717 10:38:26.488701    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:38:26.488736    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:38:26.488809    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:26.488843    3636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 10:38:26.488855    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:26.489040    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.489074    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:26.489211    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:26.489222    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.489320    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa Username:docker}
	I0717 10:38:26.489386    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:26.489533    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa Username:docker}
	W0717 10:38:26.520778    3636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:38:26.520842    3636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:38:26.572109    3636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:38:26.572138    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:38:26.572238    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:38:26.587958    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:38:26.596058    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:38:26.604066    3636 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:38:26.604116    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:38:26.612485    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:38:26.620942    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:38:26.629083    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:38:26.637275    3636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:38:26.645515    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:38:26.653717    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:38:26.662055    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:38:26.670484    3636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:38:26.677700    3636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:38:26.684962    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:26.781787    3636 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:38:26.802958    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:38:26.803029    3636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:38:26.827692    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:38:26.840860    3636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:38:26.869195    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:38:26.881705    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:38:26.892987    3636 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:38:26.911733    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:38:26.922817    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:38:26.938911    3636 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:38:26.941995    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:38:26.951587    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:38:26.965318    3636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:38:27.062809    3636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:38:27.181748    3636 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:38:27.181774    3636 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:38:27.195694    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:27.293396    3636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:38:29.632743    3636 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.339268733s)
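Note: the 130-byte /etc/docker/daemon.json written just above is what pins Docker to the "cgroupfs" cgroup driver; the log does not print its contents, but the effect can be confirmed on the node after the restart with a command like the following (key path and IP taken from this log):

    ssh -i /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa \
      docker@192.169.0.7 "docker info --format '{{.CgroupDriver}}'"
    # expected output: cgroupfs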
	I0717 10:38:29.632812    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:38:29.643610    3636 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0717 10:38:29.657480    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:38:29.668578    3636 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:38:29.772887    3636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:38:29.887343    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:29.983127    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:38:29.998340    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:38:30.010843    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:30.124553    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 10:38:30.193605    3636 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:38:30.193684    3636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:38:30.198773    3636 start.go:563] Will wait 60s for crictl version
	I0717 10:38:30.198857    3636 ssh_runner.go:195] Run: which crictl
	I0717 10:38:30.202846    3636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:38:30.233816    3636 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 10:38:30.233915    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:38:30.253337    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:38:30.311688    3636 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:38:30.384020    3636 out.go:177]   - env NO_PROXY=192.169.0.5
	I0717 10:38:30.444054    3636 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0717 10:38:30.480967    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetIP
	I0717 10:38:30.481248    3636 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:38:30.485047    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:38:30.495793    3636 mustload.go:65] Loading cluster: ha-572000
	I0717 10:38:30.495976    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:38:30.496198    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:38:30.496221    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:38:30.505198    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52009
	I0717 10:38:30.505558    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:38:30.505932    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:38:30.505942    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:38:30.506222    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:38:30.506342    3636 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:38:30.506437    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:38:30.506526    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3650
	I0717 10:38:30.507493    3636 host.go:66] Checking if "ha-572000" exists ...
	I0717 10:38:30.507764    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:38:30.507798    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:38:30.516606    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52011
	I0717 10:38:30.516943    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:38:30.517270    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:38:30.517281    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:38:30.517513    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:38:30.517630    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:38:30.517732    3636 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000 for IP: 192.169.0.7
	I0717 10:38:30.517737    3636 certs.go:194] generating shared ca certs ...
	I0717 10:38:30.517751    3636 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:38:30.517912    3636 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:38:30.517964    3636 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:38:30.517973    3636 certs.go:256] generating profile certs ...
	I0717 10:38:30.518074    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key
	I0717 10:38:30.518169    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.562e5459
	I0717 10:38:30.518222    3636 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key
	I0717 10:38:30.518229    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:38:30.518253    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:38:30.518273    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:38:30.518296    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:38:30.518321    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 10:38:30.518340    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 10:38:30.518358    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 10:38:30.518375    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 10:38:30.518476    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:38:30.518520    3636 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:38:30.518529    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:38:30.518566    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:38:30.518602    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:38:30.518634    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:38:30.518702    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:38:30.518736    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:38:30.518764    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:38:30.518783    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:38:30.518808    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:38:30.518899    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:38:30.518987    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:38:30.519076    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:38:30.519152    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:38:30.544343    3636 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0717 10:38:30.547913    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 10:38:30.557636    3636 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0717 10:38:30.561333    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0717 10:38:30.570252    3636 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 10:38:30.573631    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 10:38:30.582360    3636 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0717 10:38:30.585629    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0717 10:38:30.593318    3636 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0717 10:38:30.596412    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 10:38:30.604690    3636 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0717 10:38:30.607967    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 10:38:30.616462    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:38:30.638619    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:38:30.660075    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:38:30.679834    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:38:30.699712    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 10:38:30.720095    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 10:38:30.740379    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 10:38:30.760837    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 10:38:30.780662    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:38:30.800982    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:38:30.821007    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:38:30.841019    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 10:38:30.855040    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0717 10:38:30.868897    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 10:38:30.882296    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0717 10:38:30.895884    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 10:38:30.909514    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 10:38:30.923253    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0717 10:38:30.937006    3636 ssh_runner.go:195] Run: openssl version
	I0717 10:38:30.941436    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:38:30.950257    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:38:30.955139    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:38:30.955192    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:38:30.959572    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 10:38:30.968160    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:38:30.976579    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:38:30.980025    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:38:30.980067    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:38:30.984288    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:38:30.992609    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:38:31.001221    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:38:31.004796    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:38:31.004841    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:38:31.009065    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
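Note: the 8-hex-digit symlink names created above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names; OpenSSL looks up trusted CAs in /etc/ssl/certs by that hash, which is exactly what the `openssl x509 -hash -noout` calls compute. The hash can be reproduced on the node, for example:

    ssh -i /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa \
      docker@192.169.0.7 'openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem'
    # prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink above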
	I0717 10:38:31.017464    3636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:38:31.021030    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 10:38:31.025586    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 10:38:31.029983    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 10:38:31.034293    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 10:38:31.038625    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 10:38:31.042961    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
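Note: each `-checkend 86400` call above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would trigger regeneration. The same check can be run by hand against any of the listed certs, reusing the SSH details from this log:

    ssh -i /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa \
      docker@192.169.0.7 \
      'openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400' \
      && echo "valid for at least another 24h"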
	I0717 10:38:31.047275    3636 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.30.2 docker true true} ...
	I0717 10:38:31.047334    3636 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-572000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
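Note: the empty `ExecStart=` line followed by the full one in the kubelet drop-in above is the usual systemd pattern for replacing, rather than appending to, the unit's packaged ExecStart. The merged unit the node actually runs can be inspected with `systemctl cat`, the same command the log uses for docker.service earlier, e.g.:

    ssh -i /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa \
      docker@192.169.0.7 'sudo systemctl cat kubelet'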
	I0717 10:38:31.047351    3636 kube-vip.go:115] generating kube-vip config ...
	I0717 10:38:31.047388    3636 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 10:38:31.059333    3636 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 10:38:31.059386    3636 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
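Note: once the kubelet picks this static-pod manifest up from /etc/kubernetes/manifests (the scp a few lines below writes it there), kube-vip should advertise the control-plane VIP 192.169.0.254 on port 8443. Two hypothetical spot checks, assuming the host can reach the 192.169.0.0/24 guest network and that /version is still anonymously readable under the default RBAC bindings:

    # the VIP should answer on the API server port
    curl -k https://192.169.0.254:8443/version
    # the node currently holding the plndr-cp-lock lease should show the VIP on eth0
    ssh -i /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa \
      docker@192.169.0.7 'ip -4 addr show eth0'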
	I0717 10:38:31.059445    3636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:38:31.067249    3636 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:38:31.067300    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 10:38:31.075304    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0717 10:38:31.088747    3636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:38:31.102087    3636 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0717 10:38:31.115605    3636 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0717 10:38:31.118396    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:38:31.128499    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:31.224486    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:38:31.238639    3636 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:38:31.238848    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:38:31.259920    3636 out.go:177] * Verifying Kubernetes components...
	I0717 10:38:31.280661    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:31.399137    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:38:31.415018    3636 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:38:31.415346    3636 kapi.go:59] client config for ha-572000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa0a8b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 10:38:31.415404    3636 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0717 10:38:31.415666    3636 node_ready.go:35] waiting up to 6m0s for node "ha-572000-m03" to be "Ready" ...
	I0717 10:38:31.415725    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:31.415732    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.415740    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.415745    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.421957    3636 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 10:38:31.422260    3636 node_ready.go:49] node "ha-572000-m03" has status "Ready":"True"
	I0717 10:38:31.422274    3636 node_ready.go:38] duration metric: took 6.596243ms for node "ha-572000-m03" to be "Ready" ...
	I0717 10:38:31.422281    3636 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
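Note: the long run of paired GET requests that follows is this wait loop in action: each iteration fetches the coredns-7db6d8ff4d-2phrp pod and then its node, roughly twice a second, for as long as the pod's Ready condition stays False. An equivalent one-shot check with kubectl, using the kubeconfig from this run, would be:

    kubectl --kubeconfig /Users/jenkins/minikube-integration/19283-1099/kubeconfig \
      -n kube-system wait --for=condition=Ready \
      pod/coredns-7db6d8ff4d-2phrp --timeout=6m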
	I0717 10:38:31.422331    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:31.422337    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.422343    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.422347    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.431073    3636 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 10:38:31.436681    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:31.436766    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:31.436772    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.436778    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.436782    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.440248    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:31.440722    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:31.440730    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.440735    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.440738    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.442939    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:31.937618    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:31.937636    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.937668    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.937673    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.940388    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:31.940820    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:31.940828    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.940834    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.940838    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.943159    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:32.437866    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:32.437879    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:32.437885    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:32.437888    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:32.446284    3636 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 10:38:32.446927    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:32.446936    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:32.446943    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:32.446948    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:32.452237    3636 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 10:38:32.937878    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:32.937890    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:32.937896    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:32.937901    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:32.940439    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:32.941049    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:32.941057    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:32.941064    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:32.941080    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:32.943760    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:33.437735    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:33.437751    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:33.437757    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:33.437760    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:33.440741    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:33.441277    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:33.441285    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:33.441291    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:33.441302    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:33.443897    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:33.444546    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:33.938758    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:33.938781    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:33.938787    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:33.938791    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:33.941068    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:33.941437    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:33.941445    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:33.941451    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:33.941462    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:33.943283    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:34.437334    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:34.437347    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:34.437354    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:34.437357    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:34.440066    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:34.440546    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:34.440554    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:34.440560    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:34.440563    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:34.442659    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:34.938574    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:34.938586    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:34.938593    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:34.938602    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:34.941243    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:34.941810    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:34.941818    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:34.941824    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:34.941827    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:34.943881    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:35.437928    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:35.437948    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:35.437959    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:35.437965    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:35.441416    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:35.441923    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:35.441931    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:35.441937    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:35.441941    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:35.443781    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:35.937111    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:35.937132    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:35.937144    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:35.937149    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:35.941097    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:35.941689    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:35.941702    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:35.941708    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:35.941711    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:35.943483    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:35.943912    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:36.437284    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:36.437298    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:36.437304    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:36.437308    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:36.439570    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:36.440110    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:36.440117    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:36.440127    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:36.440130    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:36.441781    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:36.938251    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:36.938279    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:36.938357    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:36.938372    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:36.941451    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:36.942095    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:36.942103    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:36.942109    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:36.942112    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:36.943809    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:37.438234    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:37.438246    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:37.438251    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:37.438256    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:37.440243    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:37.440651    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:37.440658    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:37.440664    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:37.440674    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:37.442390    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:37.938519    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:37.938538    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:37.938588    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:37.938592    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:37.940708    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:37.941242    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:37.941250    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:37.941256    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:37.941260    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:37.942969    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:38.437210    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:38.437229    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:38.437263    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:38.437275    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:38.440621    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:38.441113    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:38.441120    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:38.441126    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:38.441130    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:38.444813    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:38.445187    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:38.937338    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:38.937354    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:38.937363    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:38.937368    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:38.939598    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:38.940020    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:38.940027    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:38.940033    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:38.940038    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:38.941562    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:39.437538    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:39.437553    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:39.437563    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:39.437566    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:39.439993    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:39.440392    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:39.440400    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:39.440405    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:39.440408    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:39.442187    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:39.938827    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:39.938859    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:39.938867    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:39.938871    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:39.941007    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:39.941470    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:39.941477    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:39.941482    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:39.941486    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:39.943155    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:40.437526    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:40.437540    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:40.437546    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:40.437550    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:40.439587    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:40.440056    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:40.440063    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:40.440068    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:40.440072    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:40.441961    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:40.937672    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:40.937688    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:40.937697    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:40.937701    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:40.940217    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:40.940568    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:40.940576    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:40.940581    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:40.940585    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:40.942351    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:40.942718    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:41.437331    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:41.437344    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:41.437350    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:41.437354    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:41.439766    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:41.440280    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:41.440287    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:41.440293    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:41.440296    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:41.441965    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:41.938758    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:41.938778    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:41.938790    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:41.938798    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:41.941589    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:41.942137    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:41.942146    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:41.942152    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:41.942157    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:41.943723    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:42.438172    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:42.438185    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:42.438194    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:42.438198    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:42.440429    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:42.440980    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:42.440988    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:42.440994    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:42.440998    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:42.442893    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:42.938134    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:42.938172    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:42.938183    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:42.938191    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:42.940744    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:42.941114    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:42.941122    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:42.941127    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:42.941131    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:42.942787    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:42.943905    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:43.438163    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:43.438195    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:43.438217    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:43.438224    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:43.440858    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:43.441272    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:43.441279    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:43.441288    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:43.441292    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:43.443069    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:43.937578    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:43.937589    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:43.937596    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:43.937599    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:43.939582    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:43.940136    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:43.940144    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:43.940150    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:43.940152    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:43.941646    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:44.437231    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:44.437244    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:44.437250    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:44.437254    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:44.439651    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:44.440190    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:44.440197    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:44.440202    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:44.440206    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:44.442158    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:44.937185    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:44.937196    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:44.937203    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:44.937206    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:44.939361    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:44.939788    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:44.939796    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:44.939802    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:44.939805    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:44.941482    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:45.437377    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:45.437392    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:45.437401    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:45.437406    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:45.439768    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:45.440303    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:45.440311    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:45.440317    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:45.440320    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:45.441925    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:45.442312    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:45.939181    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:45.939236    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:45.939246    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:45.939253    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:45.941938    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:45.942549    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:45.942557    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:45.942563    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:45.942566    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:45.944281    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:46.437228    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:46.437238    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:46.437245    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:46.437248    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:46.439099    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:46.439744    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:46.439751    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:46.439757    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:46.439760    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:46.441200    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:46.938133    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:46.938186    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:46.938196    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:46.938202    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:46.940467    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:46.940876    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:46.940884    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:46.940890    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:46.940893    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:46.942527    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:47.437838    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:47.437850    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:47.437857    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:47.437861    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:47.440152    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:47.440651    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:47.440660    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:47.440665    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:47.440669    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:47.442745    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:47.443107    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:47.937851    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:47.937867    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:47.937873    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:47.937876    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:47.940047    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:47.940510    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:47.940517    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:47.940523    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:47.940530    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:47.942242    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:48.439255    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:48.439310    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:48.439329    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:48.439338    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:48.442468    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:48.443256    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:48.443264    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:48.443269    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:48.443272    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:48.444868    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:48.937733    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:48.937744    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:48.937750    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:48.937753    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:48.939731    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:48.940190    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:48.940198    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:48.940204    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:48.940207    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:48.941747    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:49.438149    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:49.438169    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:49.438181    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:49.438190    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:49.441135    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:49.441712    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:49.441721    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:49.441726    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:49.441738    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:49.443421    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:49.443800    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:49.937835    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:49.937887    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:49.937895    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:49.937905    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:49.940121    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:49.940667    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:49.940674    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:49.940680    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:49.940698    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:49.942630    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:50.438458    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:50.438469    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:50.438476    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:50.438483    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:50.440697    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:50.441412    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:50.441420    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:50.441426    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:50.441430    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:50.443161    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:50.937976    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:50.937995    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:50.938003    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:50.938009    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:50.940796    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:50.941307    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:50.941315    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:50.941320    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:50.941323    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:50.943029    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:51.437692    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:51.437705    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:51.437714    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:51.437720    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:51.440262    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:51.440918    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:51.440926    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:51.440932    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:51.440936    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:51.442631    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:51.937774    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:51.937792    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:51.937801    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:51.937807    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:51.940276    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:51.940668    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:51.940675    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:51.940681    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:51.940685    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:51.942296    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:51.942616    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:52.438854    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:52.438878    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:52.438892    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:52.438900    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:52.442008    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:52.442522    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:52.442530    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:52.442536    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:52.442540    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:52.444262    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:52.937664    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:52.937675    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:52.937684    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:52.937687    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:52.939825    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:52.940415    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:52.940422    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:52.940428    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:52.940432    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:52.942064    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:53.439277    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:53.439300    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:53.439309    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:53.439315    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:53.441705    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:53.442130    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:53.442138    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:53.442143    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:53.442146    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:53.443926    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:53.938741    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:53.938755    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:53.938785    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:53.938790    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:53.941015    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:53.941672    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:53.941680    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:53.941685    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:53.941689    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:53.943953    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:53.944413    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:54.438636    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:54.438654    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:54.438663    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:54.438668    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:54.441262    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:54.441677    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:54.441684    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:54.441690    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:54.441693    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:54.443309    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:54.938770    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:54.938788    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:54.938798    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:54.938802    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:54.941486    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:54.941877    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:54.941884    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:54.941890    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:54.941893    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:54.943590    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:55.438030    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:55.438049    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:55.438059    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:55.438064    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:55.440706    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:55.441272    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:55.441280    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:55.441289    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:55.441292    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:55.443295    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:55.938147    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:55.938203    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:55.938215    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:55.938222    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:55.940270    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:55.940729    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:55.940737    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:55.940742    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:55.940745    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:55.942359    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:56.437637    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:56.437654    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:56.437666    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:56.437671    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:56.440401    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:56.440900    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:56.440909    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:56.440916    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:56.440920    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:56.442737    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:56.443083    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:56.938496    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:56.938521    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:56.938533    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:56.938541    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:56.941967    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:56.942683    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:56.942691    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:56.942697    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:56.942707    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:56.944542    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:57.438317    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:57.438392    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:57.438405    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:57.438411    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:57.441323    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:57.441768    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:57.441776    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:57.441780    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:57.441793    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:57.443513    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:57.937977    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:57.937990    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:57.937996    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:57.938000    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:57.940155    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:57.940631    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:57.940639    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:57.940645    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:57.940650    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:57.942518    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:58.438589    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:58.438606    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:58.438612    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:58.438615    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:58.440808    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:58.441401    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:58.441409    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:58.441415    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:58.441423    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:58.443141    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:58.443478    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:58.938651    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:58.938670    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:58.938679    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:58.938683    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:58.940981    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:58.941414    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:58.941422    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:58.941428    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:58.941431    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:58.943207    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:59.437795    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:59.437809    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:59.437815    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:59.437819    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:59.440022    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:59.440439    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:59.440446    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:59.440452    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:59.440457    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:59.442209    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:59.938380    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:59.938393    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:59.938400    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:59.938403    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:59.940648    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:59.941030    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:59.941038    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:59.941044    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:59.941048    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:59.942631    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:00.437586    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:00.437607    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:00.437616    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:00.437621    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:00.440082    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:00.440574    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:00.440582    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:00.440588    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:00.440591    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:00.442224    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:00.939171    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:00.939189    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:00.939198    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:00.939203    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:00.941658    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:00.942057    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:00.942065    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:00.942071    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:00.942075    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:00.943872    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:00.944304    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:01.438420    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:01.438444    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:01.438462    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:01.438475    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:01.441885    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:01.442448    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:01.442456    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:01.442462    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:01.442473    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:01.444325    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:01.937741    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:01.937759    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:01.937769    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:01.937774    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:01.941004    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:01.941638    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:01.941645    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:01.941651    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:01.941655    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:01.943421    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:02.439464    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:02.439515    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:02.439539    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:02.439547    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:02.442788    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:02.443568    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:02.443575    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:02.443581    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:02.443584    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:02.445070    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:02.939355    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:02.939398    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:02.939423    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:02.939432    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:02.943288    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:02.943786    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:02.943793    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:02.943798    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:02.943808    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:02.945549    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:02.945918    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:03.437814    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:03.437833    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:03.437846    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:03.437852    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:03.440696    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:03.441473    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:03.441481    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:03.441487    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:03.441494    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:03.443180    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:03.938154    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:03.938171    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:03.938179    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:03.938185    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:03.940749    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:03.941323    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:03.941330    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:03.941336    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:03.941338    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:03.942986    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:04.438509    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:04.438533    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:04.438544    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:04.438552    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:04.441587    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:04.442338    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:04.442346    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:04.442351    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:04.442354    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:04.443865    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:04.939464    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:04.939517    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:04.939527    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:04.939530    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:04.941589    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:04.942132    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:04.942139    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:04.942144    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:04.942147    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:04.943787    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:05.437854    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:05.437866    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:05.437872    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:05.437875    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:05.439895    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:05.440295    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:05.440303    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:05.440308    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:05.440312    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:05.441766    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:05.442130    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:05.937813    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:05.937871    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:05.937882    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:05.937888    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:05.940367    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:05.940885    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:05.940892    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:05.940898    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:05.940902    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:05.942721    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:06.438966    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:06.438991    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:06.439007    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:06.439020    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:06.442137    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:06.442785    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:06.442793    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:06.442799    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:06.442802    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:06.444436    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:06.938695    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:06.938714    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:06.938723    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:06.938727    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:06.941327    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:06.941790    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:06.941798    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:06.941802    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:06.941805    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:06.943432    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:07.438469    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:07.438553    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:07.438567    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:07.438573    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:07.442157    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:07.442736    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:07.442744    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:07.442750    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:07.442754    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:07.444281    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:07.444696    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:07.937804    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:07.937815    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:07.937821    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:07.937823    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:07.939794    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:07.940418    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:07.940426    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:07.940432    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:07.940435    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:07.942179    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:08.437799    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:08.437814    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:08.437821    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:08.437827    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:08.440300    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:08.440760    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:08.440768    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:08.440773    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:08.440776    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:08.442402    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:08.938764    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:08.938789    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:08.938896    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:08.938909    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:08.942041    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:08.942737    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:08.942744    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:08.942751    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:08.942754    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:08.944691    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:09.437781    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:09.437795    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:09.437802    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:09.437807    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:09.440310    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:09.440716    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:09.440725    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:09.440731    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:09.440741    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:09.442571    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:09.937834    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:09.937847    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:09.937853    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:09.937856    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:09.939731    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:09.940144    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:09.940153    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:09.940159    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:09.940163    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:09.941982    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:09.942266    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:10.438403    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:10.438414    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:10.438421    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:10.438424    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:10.440749    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:10.441120    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:10.441127    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:10.441133    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:10.441138    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:10.442757    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:10.939169    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:10.939227    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:10.939238    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:10.939244    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:10.942004    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:10.942575    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:10.942582    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:10.942588    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:10.942591    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:10.944436    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:11.438251    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:11.438276    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:11.438353    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:11.438364    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:11.441421    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:11.441961    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:11.441969    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:11.441975    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:11.441979    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:11.446242    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:39:11.938022    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:11.938033    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:11.938040    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:11.938044    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:11.939924    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:11.940511    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:11.940519    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:11.940525    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:11.940528    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:11.942450    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:11.942833    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:12.439246    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:12.439269    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:12.439279    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:12.439285    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:12.442445    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:12.443020    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:12.443027    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:12.443033    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:12.443037    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:12.444778    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:12.939028    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:12.939059    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:12.939075    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:12.939144    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:12.941663    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:12.942169    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:12.942176    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:12.942182    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:12.942198    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:12.944174    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:13.439017    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:13.439030    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:13.439036    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:13.439039    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:13.441436    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:13.442003    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:13.442011    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:13.442017    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:13.442020    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:13.443715    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:13.939125    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:13.939138    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:13.939150    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:13.939154    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:13.941396    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:13.942124    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:13.942133    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:13.942138    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:13.942141    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:13.943860    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:13.944207    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:14.439525    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:14.439539    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:14.439545    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:14.439549    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:14.441636    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:14.442072    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:14.442080    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:14.442085    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:14.442088    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:14.443727    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:14.938392    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:14.938412    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:14.938425    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:14.938431    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:14.941839    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:14.942527    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:14.942535    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:14.942541    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:14.942556    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:14.944390    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:15.439124    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:15.439154    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:15.439236    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:15.439243    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:15.442572    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:15.443123    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:15.443133    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:15.443141    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:15.443145    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:15.445133    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:15.938789    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:15.938855    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:15.938870    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:15.938877    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:15.941774    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:15.942286    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:15.942294    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:15.942300    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:15.942304    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:15.944348    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:15.944660    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:16.439349    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:16.439368    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:16.439378    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:16.439383    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:16.441938    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:16.442524    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:16.442532    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:16.442537    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:16.442548    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:16.444186    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:16.938018    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:16.938067    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:16.938075    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:16.938081    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:16.940227    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:16.940771    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:16.940780    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:16.940785    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:16.940789    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:16.942609    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:17.438002    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:17.438028    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:17.438034    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:17.438038    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:17.440220    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:17.440724    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:17.440733    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:17.440739    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:17.440742    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:17.442604    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:17.938219    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:17.938237    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:17.938249    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:17.938255    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:17.941281    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:17.941690    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:17.941698    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:17.941703    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:17.941707    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:17.943715    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:18.439167    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:18.439186    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:18.439195    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:18.439200    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:18.441725    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:18.442096    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:18.442104    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:18.442109    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:18.442113    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:18.443738    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:18.444159    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:18.939393    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:18.939469    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:18.939479    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:18.939485    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:18.941987    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:18.942423    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:18.942431    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:18.942436    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:18.942439    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:18.944249    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:19.438795    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:19.438808    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.438814    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.438816    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.441023    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:19.441456    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:19.441464    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.441470    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.441475    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.443744    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:19.444095    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.444104    3636 pod_ready.go:81] duration metric: took 48.006189425s for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.444111    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.444150    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9dzd5
	I0717 10:39:19.444154    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.444160    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.444165    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.447092    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:19.447847    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:19.447856    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.447861    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.447865    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.449618    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:19.449899    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.449908    3636 pod_ready.go:81] duration metric: took 5.792129ms for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.449915    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.449950    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000
	I0717 10:39:19.449955    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.449961    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.449966    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.451887    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:19.452242    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:19.452249    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.452255    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.452259    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.455734    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:19.456038    3636 pod_ready.go:92] pod "etcd-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.456048    3636 pod_ready.go:81] duration metric: took 6.128452ms for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.456055    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.456091    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m02
	I0717 10:39:19.456096    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.456102    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.456104    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.459121    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:19.459474    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:19.459482    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.459487    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.459491    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.461049    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:19.461321    3636 pod_ready.go:92] pod "etcd-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.461330    3636 pod_ready.go:81] duration metric: took 5.269541ms for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.461337    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.461367    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m03
	I0717 10:39:19.461373    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.461378    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.461381    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.463280    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:19.463738    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:19.463745    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.463750    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.463754    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.466609    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:19.466864    3636 pod_ready.go:92] pod "etcd-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.466874    3636 pod_ready.go:81] duration metric: took 5.532002ms for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.466885    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.640514    3636 request.go:629] Waited for 173.589043ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:39:19.640593    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:39:19.640602    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.640610    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.640614    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.643241    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:19.839100    3636 request.go:629] Waited for 195.343311ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:19.839145    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:19.839152    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.839188    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.839194    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.845230    3636 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 10:39:19.845548    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.845558    3636 pod_ready.go:81] duration metric: took 378.657463ms for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.845565    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:20.040239    3636 request.go:629] Waited for 194.632219ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:39:20.040319    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:39:20.040328    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:20.040336    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:20.040342    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:20.042714    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:20.240297    3636 request.go:629] Waited for 196.995157ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:20.240378    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:20.240384    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:20.240390    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:20.240396    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:20.242369    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:20.242695    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:20.242704    3636 pod_ready.go:81] duration metric: took 397.124019ms for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:20.242711    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:20.439359    3636 request.go:629] Waited for 196.544114ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:39:20.439408    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:39:20.439416    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:20.439427    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:20.439434    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:20.442435    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:20.638955    3636 request.go:629] Waited for 196.048572ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:20.639046    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:20.639056    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:20.639068    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:20.639075    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:20.642008    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:20.642430    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:20.642442    3636 pod_ready.go:81] duration metric: took 399.714561ms for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:20.642451    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:20.838986    3636 request.go:629] Waited for 196.455933ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:39:20.839106    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:39:20.839119    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:20.839131    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:20.839141    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:20.842621    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:21.039118    3636 request.go:629] Waited for 195.900542ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:21.039165    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:21.039176    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:21.039188    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:21.039196    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:21.042149    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:21.042711    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:21.042741    3636 pod_ready.go:81] duration metric: took 400.268935ms for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:21.042748    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:21.238981    3636 request.go:629] Waited for 196.178207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:39:21.239040    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:39:21.239051    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:21.239063    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:21.239071    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:21.242170    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:21.440519    3636 request.go:629] Waited for 197.63517ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:21.440569    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:21.440581    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:21.440597    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:21.440606    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:21.443784    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:21.444203    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:21.444212    3636 pod_ready.go:81] duration metric: took 401.448672ms for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:21.444219    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:21.640166    3636 request.go:629] Waited for 195.890355ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:39:21.640224    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:39:21.640235    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:21.640246    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:21.640254    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:21.643178    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:21.840025    3636 request.go:629] Waited for 196.38625ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:21.840077    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:21.840087    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:21.840099    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:21.840107    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:21.842881    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:21.843340    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:21.843349    3636 pod_ready.go:81] duration metric: took 399.115148ms for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:21.843356    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:22.038929    3636 request.go:629] Waited for 195.527396ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:39:22.038980    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:39:22.038991    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:22.039000    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:22.039006    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:22.041797    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:22.239447    3636 request.go:629] Waited for 196.85315ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:39:22.239496    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:39:22.239504    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:22.239515    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:22.239525    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:22.242443    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:22.242932    3636 pod_ready.go:97] node "ha-572000-m04" hosting pod "kube-proxy-5wcph" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-572000-m04" has status "Ready":"Unknown"
	I0717 10:39:22.242948    3636 pod_ready.go:81] duration metric: took 399.575996ms for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	E0717 10:39:22.242956    3636 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-572000-m04" hosting pod "kube-proxy-5wcph" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-572000-m04" has status "Ready":"Unknown"
	I0717 10:39:22.242964    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:22.439269    3636 request.go:629] Waited for 196.255356ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:39:22.439393    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:39:22.439403    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:22.439414    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:22.439420    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:22.442456    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:22.640394    3636 request.go:629] Waited for 197.266214ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:22.640491    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:22.640500    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:22.640509    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:22.640514    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:22.643031    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:22.643471    3636 pod_ready.go:92] pod "kube-proxy-h7k9z" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:22.643480    3636 pod_ready.go:81] duration metric: took 400.50076ms for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:22.643487    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:22.839377    3636 request.go:629] Waited for 195.844443ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:39:22.839468    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:39:22.839477    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:22.839485    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:22.839491    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:22.841921    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:23.039004    3636 request.go:629] Waited for 196.604394ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:23.039109    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:23.039120    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:23.039131    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:23.039138    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:23.042022    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:23.042449    3636 pod_ready.go:92] pod "kube-proxy-hst7h" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:23.042462    3636 pod_ready.go:81] duration metric: took 398.959822ms for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:23.042480    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:23.240001    3636 request.go:629] Waited for 197.469314ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:39:23.240093    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:39:23.240110    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:23.240121    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:23.240131    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:23.243284    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:23.439300    3636 request.go:629] Waited for 195.300943ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:23.439332    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:23.439336    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:23.439343    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:23.439370    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:23.441287    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:23.441722    3636 pod_ready.go:92] pod "kube-proxy-v6jxh" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:23.441732    3636 pod_ready.go:81] duration metric: took 399.23495ms for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:23.441739    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:23.638943    3636 request.go:629] Waited for 197.165268ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:39:23.639000    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:39:23.639006    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:23.639012    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:23.639017    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:23.641044    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:23.840535    3636 request.go:629] Waited for 199.126882ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:23.840627    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:23.840639    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:23.840679    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:23.840691    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:23.843464    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:23.843963    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:23.843976    3636 pod_ready.go:81] duration metric: took 402.220047ms for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:23.843984    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:24.039540    3636 request.go:629] Waited for 195.50331ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:39:24.039598    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:39:24.039670    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.039685    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.039691    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.042477    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:24.239459    3636 request.go:629] Waited for 196.457492ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:24.239561    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:24.239573    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.239585    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.239591    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.242659    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:24.243312    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:24.243327    3636 pod_ready.go:81] duration metric: took 399.325407ms for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:24.243336    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:24.439080    3636 request.go:629] Waited for 195.673891ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:39:24.439191    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:39:24.439202    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.439213    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.439223    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.443262    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:39:24.639182    3636 request.go:629] Waited for 195.517919ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:24.639292    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:24.639303    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.639316    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.639324    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.642200    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:24.642657    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:24.642666    3636 pod_ready.go:81] duration metric: took 399.31371ms for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:24.642674    3636 pod_ready.go:38] duration metric: took 53.219035328s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:39:24.642686    3636 api_server.go:52] waiting for apiserver process to appear ...
	I0717 10:39:24.642749    3636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 10:39:24.655291    3636 api_server.go:72] duration metric: took 53.415271815s to wait for apiserver process to appear ...
	I0717 10:39:24.655303    3636 api_server.go:88] waiting for apiserver healthz status ...
	I0717 10:39:24.655313    3636 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0717 10:39:24.659504    3636 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0717 10:39:24.659539    3636 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0717 10:39:24.659544    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.659549    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.659552    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.660035    3636 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:39:24.660129    3636 api_server.go:141] control plane version: v1.30.2
	I0717 10:39:24.660138    3636 api_server.go:131] duration metric: took 4.830633ms to wait for apiserver health ...
	I0717 10:39:24.660142    3636 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 10:39:24.840282    3636 request.go:629] Waited for 180.099076ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:39:24.840353    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:39:24.840361    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.840369    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.840373    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.845121    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:39:24.850038    3636 system_pods.go:59] 26 kube-system pods found
	I0717 10:39:24.850051    3636 system_pods.go:61] "coredns-7db6d8ff4d-2phrp" [e76a91cf-8a14-454c-baee-7c0128645e0e] Running
	I0717 10:39:24.850054    3636 system_pods.go:61] "coredns-7db6d8ff4d-9dzd5" [f9a7cff1-56a3-4600-ae2f-a951dda10753] Running
	I0717 10:39:24.850057    3636 system_pods.go:61] "etcd-ha-572000" [2d7b717e-d404-4c63-afe9-799de6711964] Running
	I0717 10:39:24.850060    3636 system_pods.go:61] "etcd-ha-572000-m02" [8abc7662-c159-4953-9aba-11a75b4e7d65] Running
	I0717 10:39:24.850062    3636 system_pods.go:61] "etcd-ha-572000-m03" [78629908-d362-4a27-933f-2b929867c22f] Running
	I0717 10:39:24.850065    3636 system_pods.go:61] "kindnet-5xsrp" [0b84aa4d-7452-47f6-8052-f063e2e8ef7b] Running
	I0717 10:39:24.850067    3636 system_pods.go:61] "kindnet-72zfp" [0cc735c4-08b3-499a-b9e2-8c38377f371b] Running
	I0717 10:39:24.850069    3636 system_pods.go:61] "kindnet-g2m92" [be3d84cf-f9e8-426c-b02a-49eb7eed9d6a] Running
	I0717 10:39:24.850071    3636 system_pods.go:61] "kindnet-t85bv" [8649cc70-8ce8-4caf-938e-bd253fa5b7ae] Running
	I0717 10:39:24.850074    3636 system_pods.go:61] "kube-apiserver-ha-572000" [6409591d-7a50-414b-be63-44d5ddb0b0e0] Running
	I0717 10:39:24.850076    3636 system_pods.go:61] "kube-apiserver-ha-572000-m02" [d658459c-67f9-4798-87f2-c651d830af35] Running
	I0717 10:39:24.850078    3636 system_pods.go:61] "kube-apiserver-ha-572000-m03" [ac1c02f1-f023-4ad2-99bc-2657fb9d50d7] Running
	I0717 10:39:24.850081    3636 system_pods.go:61] "kube-controller-manager-ha-572000" [075c4d23-04bd-404a-822d-ae7d326a68ac] Running
	I0717 10:39:24.850084    3636 system_pods.go:61] "kube-controller-manager-ha-572000-m02" [3014bdb3-bb86-48d5-a045-2a8dab026bd8] Running
	I0717 10:39:24.850086    3636 system_pods.go:61] "kube-controller-manager-ha-572000-m03" [3b76e220-3a5b-4dac-8f69-6b380799d2ef] Running
	I0717 10:39:24.850088    3636 system_pods.go:61] "kube-proxy-5wcph" [731f5b57-131e-4e97-b47a-036b8d4edbcd] Running
	I0717 10:39:24.850105    3636 system_pods.go:61] "kube-proxy-h7k9z" [cf687084-b40d-43b6-9476-8c60c5f37d1d] Running
	I0717 10:39:24.850110    3636 system_pods.go:61] "kube-proxy-hst7h" [2b7c8d2c-3d71-4357-9429-1d8438c446f5] Running
	I0717 10:39:24.850113    3636 system_pods.go:61] "kube-proxy-v6jxh" [3f952fc8-747f-49da-b400-5212dba538d8] Running
	I0717 10:39:24.850116    3636 system_pods.go:61] "kube-scheduler-ha-572000" [da36a422-ba49-4cd6-8f47-9be7c43657be] Running
	I0717 10:39:24.850118    3636 system_pods.go:61] "kube-scheduler-ha-572000-m02" [13e289a2-8d3c-478f-9220-1d114dd5bf62] Running
	I0717 10:39:24.850121    3636 system_pods.go:61] "kube-scheduler-ha-572000-m03" [428716db-8096-4b57-9f90-e706d5683852] Running
	I0717 10:39:24.850124    3636 system_pods.go:61] "kube-vip-ha-572000" [57c4f54d-e859-4429-90dd-0402a9e73727] Running
	I0717 10:39:24.850127    3636 system_pods.go:61] "kube-vip-ha-572000-m02" [fa57b8d2-bc41-42af-9d9a-1b68f882e6fe] Running
	I0717 10:39:24.850129    3636 system_pods.go:61] "kube-vip-ha-572000-m03" [a35c5007-dfe3-4b58-b16c-d568b8f9c1c2] Running
	I0717 10:39:24.850133    3636 system_pods.go:61] "storage-provisioner" [1801216d-6529-4049-b874-1577132fc03f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 10:39:24.850139    3636 system_pods.go:74] duration metric: took 189.987862ms to wait for pod list to return data ...
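	[aside] The pod wait above is a single GET of /api/v1/namespaces/kube-system/pods. An illustrative equivalent from the host, assuming a kubeconfig pointed at this ha-572000 cluster (not a command from this run):
	    # illustrative only
	    kubectl -n kube-system get pods -o wide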
	I0717 10:39:24.850145    3636 default_sa.go:34] waiting for default service account to be created ...
	I0717 10:39:25.040731    3636 request.go:629] Waited for 190.528349ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0717 10:39:25.040830    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0717 10:39:25.040841    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:25.040852    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:25.040860    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:25.044018    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:25.044088    3636 default_sa.go:45] found service account: "default"
	I0717 10:39:25.044097    3636 default_sa.go:55] duration metric: took 193.941803ms for default service account to be created ...
	I0717 10:39:25.044103    3636 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 10:39:25.240503    3636 request.go:629] Waited for 196.351718ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:39:25.240543    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:39:25.240548    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:25.240554    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:25.240583    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:25.244975    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:39:25.249908    3636 system_pods.go:86] 26 kube-system pods found
	I0717 10:39:25.249919    3636 system_pods.go:89] "coredns-7db6d8ff4d-2phrp" [e76a91cf-8a14-454c-baee-7c0128645e0e] Running
	I0717 10:39:25.249923    3636 system_pods.go:89] "coredns-7db6d8ff4d-9dzd5" [f9a7cff1-56a3-4600-ae2f-a951dda10753] Running
	I0717 10:39:25.249940    3636 system_pods.go:89] "etcd-ha-572000" [2d7b717e-d404-4c63-afe9-799de6711964] Running
	I0717 10:39:25.249944    3636 system_pods.go:89] "etcd-ha-572000-m02" [8abc7662-c159-4953-9aba-11a75b4e7d65] Running
	I0717 10:39:25.249948    3636 system_pods.go:89] "etcd-ha-572000-m03" [78629908-d362-4a27-933f-2b929867c22f] Running
	I0717 10:39:25.249951    3636 system_pods.go:89] "kindnet-5xsrp" [0b84aa4d-7452-47f6-8052-f063e2e8ef7b] Running
	I0717 10:39:25.249955    3636 system_pods.go:89] "kindnet-72zfp" [0cc735c4-08b3-499a-b9e2-8c38377f371b] Running
	I0717 10:39:25.249959    3636 system_pods.go:89] "kindnet-g2m92" [be3d84cf-f9e8-426c-b02a-49eb7eed9d6a] Running
	I0717 10:39:25.249962    3636 system_pods.go:89] "kindnet-t85bv" [8649cc70-8ce8-4caf-938e-bd253fa5b7ae] Running
	I0717 10:39:25.249966    3636 system_pods.go:89] "kube-apiserver-ha-572000" [6409591d-7a50-414b-be63-44d5ddb0b0e0] Running
	I0717 10:39:25.249969    3636 system_pods.go:89] "kube-apiserver-ha-572000-m02" [d658459c-67f9-4798-87f2-c651d830af35] Running
	I0717 10:39:25.249973    3636 system_pods.go:89] "kube-apiserver-ha-572000-m03" [ac1c02f1-f023-4ad2-99bc-2657fb9d50d7] Running
	I0717 10:39:25.249976    3636 system_pods.go:89] "kube-controller-manager-ha-572000" [075c4d23-04bd-404a-822d-ae7d326a68ac] Running
	I0717 10:39:25.249979    3636 system_pods.go:89] "kube-controller-manager-ha-572000-m02" [3014bdb3-bb86-48d5-a045-2a8dab026bd8] Running
	I0717 10:39:25.249983    3636 system_pods.go:89] "kube-controller-manager-ha-572000-m03" [3b76e220-3a5b-4dac-8f69-6b380799d2ef] Running
	I0717 10:39:25.249987    3636 system_pods.go:89] "kube-proxy-5wcph" [731f5b57-131e-4e97-b47a-036b8d4edbcd] Running
	I0717 10:39:25.249990    3636 system_pods.go:89] "kube-proxy-h7k9z" [cf687084-b40d-43b6-9476-8c60c5f37d1d] Running
	I0717 10:39:25.249994    3636 system_pods.go:89] "kube-proxy-hst7h" [2b7c8d2c-3d71-4357-9429-1d8438c446f5] Running
	I0717 10:39:25.249997    3636 system_pods.go:89] "kube-proxy-v6jxh" [3f952fc8-747f-49da-b400-5212dba538d8] Running
	I0717 10:39:25.250001    3636 system_pods.go:89] "kube-scheduler-ha-572000" [da36a422-ba49-4cd6-8f47-9be7c43657be] Running
	I0717 10:39:25.250005    3636 system_pods.go:89] "kube-scheduler-ha-572000-m02" [13e289a2-8d3c-478f-9220-1d114dd5bf62] Running
	I0717 10:39:25.250008    3636 system_pods.go:89] "kube-scheduler-ha-572000-m03" [428716db-8096-4b57-9f90-e706d5683852] Running
	I0717 10:39:25.250012    3636 system_pods.go:89] "kube-vip-ha-572000" [57c4f54d-e859-4429-90dd-0402a9e73727] Running
	I0717 10:39:25.250019    3636 system_pods.go:89] "kube-vip-ha-572000-m02" [fa57b8d2-bc41-42af-9d9a-1b68f882e6fe] Running
	I0717 10:39:25.250026    3636 system_pods.go:89] "kube-vip-ha-572000-m03" [a35c5007-dfe3-4b58-b16c-d568b8f9c1c2] Running
	I0717 10:39:25.250031    3636 system_pods.go:89] "storage-provisioner" [1801216d-6529-4049-b874-1577132fc03f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 10:39:25.250037    3636 system_pods.go:126] duration metric: took 205.924043ms to wait for k8s-apps to be running ...
	I0717 10:39:25.250043    3636 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 10:39:25.250097    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 10:39:25.260730    3636 system_svc.go:56] duration metric: took 10.680441ms WaitForService to wait for kubelet
	I0717 10:39:25.260752    3636 kubeadm.go:582] duration metric: took 54.020711767s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:39:25.260767    3636 node_conditions.go:102] verifying NodePressure condition ...
	I0717 10:39:25.440260    3636 request.go:629] Waited for 179.444294ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0717 10:39:25.440305    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0717 10:39:25.440313    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:25.440326    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:25.440335    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:25.443664    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:25.444820    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:39:25.444830    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:39:25.444839    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:39:25.444842    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:39:25.444845    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:39:25.444848    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:39:25.444851    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:39:25.444854    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:39:25.444857    3636 node_conditions.go:105] duration metric: took 184.081224ms to run NodePressure ...
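	[aside] The NodePressure check above only reads each node's capacity from GET /api/v1/nodes. The same figures can be inspected with kubectl (illustrative, assuming a kubeconfig for this cluster; not a command from this run):
	    # illustrative only
	    kubectl describe nodes | grep -E 'Name:|cpu:|ephemeral-storage:'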
	I0717 10:39:25.444866    3636 start.go:241] waiting for startup goroutines ...
	I0717 10:39:25.444881    3636 start.go:255] writing updated cluster config ...
	I0717 10:39:25.466841    3636 out.go:177] 
	I0717 10:39:25.488444    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:39:25.488557    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:39:25.511165    3636 out.go:177] * Starting "ha-572000-m04" worker node in "ha-572000" cluster
	I0717 10:39:25.553049    3636 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:39:25.553078    3636 cache.go:56] Caching tarball of preloaded images
	I0717 10:39:25.553293    3636 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:39:25.553311    3636 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:39:25.553441    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:39:25.554263    3636 start.go:360] acquireMachinesLock for ha-572000-m04: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:39:25.554357    3636 start.go:364] duration metric: took 71.034µs to acquireMachinesLock for "ha-572000-m04"
	I0717 10:39:25.554380    3636 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:39:25.554388    3636 fix.go:54] fixHost starting: m04
	I0717 10:39:25.554780    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:39:25.554805    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:39:25.564043    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52015
	I0717 10:39:25.564385    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:39:25.564752    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:39:25.564769    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:39:25.564963    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:39:25.565075    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:39:25.565158    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetState
	I0717 10:39:25.565257    3636 main.go:141] libmachine: (ha-572000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:39:25.565368    3636 main.go:141] libmachine: (ha-572000-m04) DBG | hyperkit pid from json: 3096
	I0717 10:39:25.566303    3636 main.go:141] libmachine: (ha-572000-m04) DBG | hyperkit pid 3096 missing from process table
	I0717 10:39:25.566325    3636 fix.go:112] recreateIfNeeded on ha-572000-m04: state=Stopped err=<nil>
	I0717 10:39:25.566334    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	W0717 10:39:25.566413    3636 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:39:25.587318    3636 out.go:177] * Restarting existing hyperkit VM for "ha-572000-m04" ...
	I0717 10:39:25.629121    3636 main.go:141] libmachine: (ha-572000-m04) Calling .Start
	I0717 10:39:25.629280    3636 main.go:141] libmachine: (ha-572000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:39:25.629323    3636 main.go:141] libmachine: (ha-572000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/hyperkit.pid
	I0717 10:39:25.629373    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Using UUID d62b35de-5f9d-4091-a1f9-ae55052b3d93
	I0717 10:39:25.659758    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Generated MAC 1e:37:45:6a:f1:7f
	I0717 10:39:25.659780    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:39:25.659921    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"d62b35de-5f9d-4091-a1f9-ae55052b3d93", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002879b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:39:25.659979    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"d62b35de-5f9d-4091-a1f9-ae55052b3d93", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002879b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:39:25.660027    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "d62b35de-5f9d-4091-a1f9-ae55052b3d93", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/ha-572000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:39:25.660072    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U d62b35de-5f9d-4091-a1f9-ae55052b3d93 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/ha-572000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:39:25.660086    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:39:25.661465    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: Pid is 3683
	I0717 10:39:25.661986    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Attempt 0
	I0717 10:39:25.661995    3636 main.go:141] libmachine: (ha-572000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:39:25.662068    3636 main.go:141] libmachine: (ha-572000-m04) DBG | hyperkit pid from json: 3683
	I0717 10:39:25.664876    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Searching for 1e:37:45:6a:f1:7f in /var/db/dhcpd_leases ...
	I0717 10:39:25.665000    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:39:25.665028    3636 main.go:141] libmachine: (ha-572000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:d3:62:da:43:cf ID:1,6e:d3:62:da:43:cf Lease:0x6699530d}
	I0717 10:39:25.665090    3636 main.go:141] libmachine: (ha-572000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x669952ec}
	I0717 10:39:25.665098    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetConfigRaw
	I0717 10:39:25.665107    3636 main.go:141] libmachine: (ha-572000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x669952da}
	I0717 10:39:25.665121    3636 main.go:141] libmachine: (ha-572000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:37:45:6a:f1:7f ID:1,1e:37:45:6a:f1:7f Lease:0x6698001b}
	I0717 10:39:25.665133    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Found match: 1e:37:45:6a:f1:7f
	I0717 10:39:25.665155    3636 main.go:141] libmachine: (ha-572000-m04) DBG | IP: 192.169.0.8
	I0717 10:39:25.665871    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetIP
	I0717 10:39:25.666075    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:39:25.666480    3636 machine.go:94] provisionDockerMachine start ...
	I0717 10:39:25.666492    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:39:25.666622    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:39:25.666758    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:39:25.666855    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:39:25.666997    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:39:25.667100    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:39:25.667218    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:39:25.667397    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:39:25.667404    3636 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:39:25.669640    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:39:25.678044    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:39:25.679048    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:39:25.679102    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:39:25.679117    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:39:25.679129    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:39:26.061153    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:39:26.061169    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:39:26.176025    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:39:26.176085    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:39:26.176109    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:39:26.176141    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:39:26.176817    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:39:26.176827    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:39:31.459017    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:31 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:39:31.459116    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:31 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:39:31.459128    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:31 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:39:31.482911    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:31 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:40:00.729304    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:40:00.729320    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetMachineName
	I0717 10:40:00.729447    3636 buildroot.go:166] provisioning hostname "ha-572000-m04"
	I0717 10:40:00.729459    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetMachineName
	I0717 10:40:00.729548    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:00.729650    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:00.729752    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:00.729829    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:00.729922    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:00.730060    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:00.730229    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:00.730238    3636 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000-m04 && echo "ha-572000-m04" | sudo tee /etc/hostname
	I0717 10:40:00.792250    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000-m04
	
	I0717 10:40:00.792267    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:00.792395    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:00.792496    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:00.792601    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:00.792686    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:00.792813    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:00.792953    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:00.792965    3636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:40:00.851570    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
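	[aside] The script above only appends a 127.0.1.1 entry (and would echo it via tee) when no matching hostname line exists. An illustrative spot-check of the hostname and the /etc/hosts entry it maintains, over the same SSH credentials this log uses elsewhere (user docker, the m04 key, IP 192.169.0.8), not a command from this run:
	    # illustrative only; key path, user and IP taken from the sshutil lines in this log
	    ssh -i /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/id_rsa \
	      docker@192.169.0.8 'hostname; grep ha-572000-m04 /etc/hosts'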
	I0717 10:40:00.851592    3636 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:40:00.851608    3636 buildroot.go:174] setting up certificates
	I0717 10:40:00.851614    3636 provision.go:84] configureAuth start
	I0717 10:40:00.851621    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetMachineName
	I0717 10:40:00.851754    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetIP
	I0717 10:40:00.851843    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:00.851935    3636 provision.go:143] copyHostCerts
	I0717 10:40:00.851965    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:40:00.852026    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:40:00.852032    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:40:00.852183    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:40:00.852421    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:40:00.852465    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:40:00.852470    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:40:00.852549    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:40:00.852695    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:40:00.852734    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:40:00.852739    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:40:00.852814    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:40:00.852963    3636 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000-m04 san=[127.0.0.1 192.169.0.8 ha-572000-m04 localhost minikube]
	I0717 10:40:01.012731    3636 provision.go:177] copyRemoteCerts
	I0717 10:40:01.012781    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:40:01.012796    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:01.012945    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:01.013036    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.013118    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:01.013205    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/id_rsa Username:docker}
	I0717 10:40:01.045440    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:40:01.045513    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 10:40:01.065877    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:40:01.065952    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:40:01.086341    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:40:01.086417    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:40:01.107237    3636 provision.go:87] duration metric: took 255.607467ms to configureAuth
	I0717 10:40:01.107252    3636 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:40:01.107441    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:40:01.107470    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:01.107602    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:01.107691    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:01.107775    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.107862    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.107936    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:01.108052    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:01.108176    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:01.108184    3636 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:40:01.159812    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:40:01.159826    3636 buildroot.go:70] root file system type: tmpfs
	I0717 10:40:01.159906    3636 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:40:01.159918    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:01.160045    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:01.160133    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.160218    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.160312    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:01.160436    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:01.160588    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:01.160638    3636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:40:01.222986    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:40:01.223013    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:01.223158    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:01.223263    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.223339    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.223425    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:01.223557    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:01.223705    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:01.223717    3636 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:40:02.793231    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:40:02.793247    3636 machine.go:97] duration metric: took 37.125816173s to provisionDockerMachine
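	[aside] The unit written above uses the standard systemd pattern for overriding a command: the empty ExecStart= clears any inherited ExecStart before the single intended dockerd command is set, avoiding the "more than one ExecStart=" error quoted in the unit's comments. The diff above fails only because no previous /lib/systemd/system/docker.service existed, so the new file is moved into place and enabled. Once that sequence succeeds, the effective unit can be inspected from inside the guest (illustrative, not from this run):
	    # illustrative only
	    sudo systemctl cat docker
	    systemctl is-active docker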
	I0717 10:40:02.793256    3636 start.go:293] postStartSetup for "ha-572000-m04" (driver="hyperkit")
	I0717 10:40:02.793263    3636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:40:02.793273    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:02.793461    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:40:02.793475    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:02.793570    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:02.793662    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:02.793746    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:02.793821    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/id_rsa Username:docker}
	I0717 10:40:02.826174    3636 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:40:02.829517    3636 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:40:02.829527    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:40:02.829627    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:40:02.829814    3636 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:40:02.829820    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:40:02.830025    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:40:02.837723    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:40:02.858109    3636 start.go:296] duration metric: took 64.843134ms for postStartSetup
	I0717 10:40:02.858164    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:02.858343    3636 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:40:02.858357    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:02.858452    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:02.858535    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:02.858625    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:02.858709    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/id_rsa Username:docker}
	I0717 10:40:02.891466    3636 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:40:02.891526    3636 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:40:02.924508    3636 fix.go:56] duration metric: took 37.369170253s for fixHost
	I0717 10:40:02.924533    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:02.924664    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:02.924753    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:02.924844    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:02.924927    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:02.925043    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:02.925181    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:02.925189    3636 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 10:40:02.979156    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721238002.907586801
	
	I0717 10:40:02.979168    3636 fix.go:216] guest clock: 1721238002.907586801
	I0717 10:40:02.979174    3636 fix.go:229] Guest: 2024-07-17 10:40:02.907586801 -0700 PDT Remote: 2024-07-17 10:40:02.924523 -0700 PDT m=+161.794729692 (delta=-16.936199ms)
	I0717 10:40:02.979185    3636 fix.go:200] guest clock delta is within tolerance: -16.936199ms
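	[aside] The delta reported above is simply guest minus host at sub-second precision: 1721238002.907586801 − 1721238002.924523 ≈ −0.016936 s, i.e. the −16.936199ms shown, which is within tolerance, so minikube proceeds without adjusting the guest clock.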
	I0717 10:40:02.979189    3636 start.go:83] releasing machines lock for "ha-572000-m04", held for 37.423872596s
	I0717 10:40:02.979207    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:02.979341    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetIP
	I0717 10:40:03.002677    3636 out.go:177] * Found network options:
	I0717 10:40:03.023433    3636 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0717 10:40:03.044600    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:40:03.044630    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:40:03.044645    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:40:03.044662    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:03.045380    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:03.045584    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:03.045691    3636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:40:03.045739    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	W0717 10:40:03.045803    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:40:03.045829    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:40:03.045847    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:40:03.045916    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:03.045932    3636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 10:40:03.045950    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:03.046116    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:03.046197    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:03.046277    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:03.046336    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:03.046416    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/id_rsa Username:docker}
	I0717 10:40:03.046472    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:03.046583    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/id_rsa Username:docker}
	W0717 10:40:03.078338    3636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:40:03.078404    3636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:40:03.127460    3636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:40:03.127478    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:40:03.127562    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:40:03.143174    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:40:03.152039    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:40:03.160575    3636 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:40:03.160636    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:40:03.169267    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:40:03.178061    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:40:03.186799    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:40:03.195713    3636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:40:03.205361    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:40:03.214887    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:40:03.223632    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:40:03.232306    3636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:40:03.240303    3636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:40:03.248146    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:03.349118    3636 ssh_runner.go:195] Run: sudo systemctl restart containerd
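	[aside] The sed chain above rewrites /etc/containerd/config.toml in place before containerd is restarted. An illustrative check of the values it sets, run inside the guest (not a command from this run):
	    # illustrative only
	    sudo grep -E 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	    # expected, per the edits above:
	    #   sandbox_image = "registry.k8s.io/pause:3.9"
	    #   SystemdCgroup = false
	    #   conf_dir = "/etc/cni/net.d"
	    #   enable_unprivileged_ports = true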
	I0717 10:40:03.368632    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:40:03.368697    3636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:40:03.382935    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:40:03.394904    3636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:40:03.408677    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:40:03.424538    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:40:03.436679    3636 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:40:03.457267    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:40:03.468621    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:40:03.484458    3636 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:40:03.487477    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:40:03.495866    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:40:03.509467    3636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:40:03.610005    3636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:40:03.711300    3636 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:40:03.711330    3636 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:40:03.725314    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:03.818685    3636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:40:06.069148    3636 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.250387117s)
	I0717 10:40:06.069225    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:40:06.080064    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:40:06.090634    3636 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:40:06.182522    3636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:40:06.285041    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:06.397211    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:40:06.410586    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:40:06.421941    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:06.525211    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 10:40:06.593566    3636 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:40:06.593658    3636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:40:06.598237    3636 start.go:563] Will wait 60s for crictl version
	I0717 10:40:06.598298    3636 ssh_runner.go:195] Run: which crictl
	I0717 10:40:06.601369    3636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:40:06.630287    3636 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
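	[aside] The crictl probe above works because /etc/crictl.yaml was pointed at the cri-dockerd socket a few seconds earlier (the printf at 10:40:03.468). An illustrative manual equivalent from inside the guest (not a command from this run):
	    # illustrative only
	    cat /etc/crictl.yaml   # expect: runtime-endpoint: unix:///var/run/cri-dockerd.sock
	    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version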
	I0717 10:40:06.630357    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:40:06.648217    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:40:06.713331    3636 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:40:06.734501    3636 out.go:177]   - env NO_PROXY=192.169.0.5
	I0717 10:40:06.755443    3636 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0717 10:40:06.776545    3636 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	I0717 10:40:06.797619    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetIP
	I0717 10:40:06.797849    3636 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:40:06.801369    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:40:06.811681    3636 mustload.go:65] Loading cluster: ha-572000
	I0717 10:40:06.811867    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:40:06.812096    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:40:06.812120    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:40:06.821106    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52038
	I0717 10:40:06.821460    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:40:06.821823    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:40:06.821839    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:40:06.822045    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:40:06.822158    3636 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:40:06.822237    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:40:06.822325    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3650
	I0717 10:40:06.823304    3636 host.go:66] Checking if "ha-572000" exists ...
	I0717 10:40:06.823558    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:40:06.823583    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:40:06.832052    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52040
	I0717 10:40:06.832422    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:40:06.832722    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:40:06.832733    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:40:06.832924    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:40:06.833068    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:40:06.833173    3636 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000 for IP: 192.169.0.8
	I0717 10:40:06.833178    3636 certs.go:194] generating shared ca certs ...
	I0717 10:40:06.833187    3636 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:40:06.833369    3636 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:40:06.833445    3636 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:40:06.833455    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:40:06.833477    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:40:06.833496    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:40:06.833513    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:40:06.833602    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:40:06.833654    3636 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:40:06.833664    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:40:06.833699    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:40:06.833731    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:40:06.833765    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:40:06.833830    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:40:06.833866    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:40:06.833895    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:40:06.833914    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:40:06.833943    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:40:06.854528    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:40:06.874473    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:40:06.894419    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:40:06.914655    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:40:06.934481    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:40:06.953938    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:40:06.973423    3636 ssh_runner.go:195] Run: openssl version
	I0717 10:40:06.977846    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:40:06.987226    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:40:06.990594    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:40:06.990633    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:40:06.994910    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:40:07.004316    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:40:07.013700    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:40:07.017207    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:40:07.017252    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:40:07.021661    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
	I0717 10:40:07.030891    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:40:07.040013    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:40:07.043424    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:40:07.043460    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:40:07.048023    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 10:40:07.057292    3636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:40:07.060465    3636 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 10:40:07.060498    3636 kubeadm.go:934] updating node {m04 192.169.0.8 0 v1.30.2 docker false true} ...
	I0717 10:40:07.060568    3636 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-572000-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 10:40:07.060612    3636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:40:07.068828    3636 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:40:07.068888    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0717 10:40:07.077989    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0717 10:40:07.091753    3636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:40:07.105613    3636 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0717 10:40:07.108527    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:40:07.118827    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:07.218618    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:40:07.232580    3636 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0717 10:40:07.232780    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:40:07.270354    3636 out.go:177] * Verifying Kubernetes components...
	I0717 10:40:07.343786    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:07.486955    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:40:07.502599    3636 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:40:07.502930    3636 kapi.go:59] client config for ha-572000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa0a8b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 10:40:07.502990    3636 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0717 10:40:07.503236    3636 node_ready.go:35] waiting up to 6m0s for node "ha-572000-m04" to be "Ready" ...
	I0717 10:40:07.503290    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:40:07.503296    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.503303    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.503305    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.507147    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:07.507598    3636 node_ready.go:49] node "ha-572000-m04" has status "Ready":"True"
	I0717 10:40:07.507619    3636 node_ready.go:38] duration metric: took 4.370479ms for node "ha-572000-m04" to be "Ready" ...
	I0717 10:40:07.507631    3636 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:40:07.507695    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:40:07.507705    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.507714    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.507718    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.517761    3636 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0717 10:40:07.525740    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.525796    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:40:07.525804    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.525810    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.525815    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.527956    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:07.528370    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:07.528378    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.528384    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.528387    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.530521    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:07.530888    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:07.530899    3636 pod_ready.go:81] duration metric: took 5.142557ms for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.530907    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.530969    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9dzd5
	I0717 10:40:07.530978    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.530985    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.530990    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.533172    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:07.533578    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:07.533586    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.533592    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.533595    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.535152    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.535453    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:07.535462    3636 pod_ready.go:81] duration metric: took 4.549454ms for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.535469    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.535504    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000
	I0717 10:40:07.535509    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.535515    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.535519    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.537042    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.537410    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:07.537417    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.537423    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.537426    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.538975    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.539323    3636 pod_ready.go:92] pod "etcd-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:07.539331    3636 pod_ready.go:81] duration metric: took 3.856623ms for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.539337    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.539378    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m02
	I0717 10:40:07.539383    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.539389    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.539393    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.541081    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.541459    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:07.541467    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.541473    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.541476    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.542992    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.543383    3636 pod_ready.go:92] pod "etcd-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:07.543391    3636 pod_ready.go:81] duration metric: took 4.050033ms for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.543397    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.703505    3636 request.go:629] Waited for 160.066521ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m03
	I0717 10:40:07.703540    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m03
	I0717 10:40:07.703545    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.703551    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.703556    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.705548    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.903510    3636 request.go:629] Waited for 197.511686ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:07.903551    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:07.903556    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.903562    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.903601    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.905857    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:07.906157    3636 pod_ready.go:92] pod "etcd-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:07.906168    3636 pod_ready.go:81] duration metric: took 362.756768ms for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.906180    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:08.103966    3636 request.go:629] Waited for 197.743139ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:40:08.104021    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:40:08.104030    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:08.104037    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:08.104046    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:08.106066    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:08.303534    3636 request.go:629] Waited for 196.774341ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:08.303599    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:08.303671    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:08.303686    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:08.303697    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:08.306313    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:08.306837    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:08.306847    3636 pod_ready.go:81] duration metric: took 400.65093ms for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:08.306854    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:08.503920    3636 request.go:629] Waited for 197.018157ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:40:08.503964    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:40:08.503984    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:08.503990    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:08.503995    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:08.506056    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:08.703436    3636 request.go:629] Waited for 196.948288ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:08.703494    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:08.703500    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:08.703506    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:08.703511    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:08.705852    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:08.706163    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:08.706173    3636 pod_ready.go:81] duration metric: took 399.30321ms for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:08.706179    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:08.903771    3636 request.go:629] Waited for 197.50006ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:40:08.903806    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:40:08.903813    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:08.903820    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:08.903824    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:08.906399    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:09.104084    3636 request.go:629] Waited for 197.163497ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:09.104171    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:09.104176    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:09.104182    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:09.104187    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:09.106361    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:09.106707    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:09.106718    3636 pod_ready.go:81] duration metric: took 400.52413ms for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:09.106726    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:09.304052    3636 request.go:629] Waited for 197.283261ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:40:09.304088    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:40:09.304093    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:09.304130    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:09.304135    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:09.306083    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:09.504106    3636 request.go:629] Waited for 197.645757ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:09.504208    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:09.504220    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:09.504232    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:09.504240    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:09.511286    3636 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 10:40:09.511696    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:09.511709    3636 pod_ready.go:81] duration metric: took 404.967221ms for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:09.511716    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:09.703585    3636 request.go:629] Waited for 191.795231ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:40:09.703642    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:40:09.703653    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:09.703665    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:09.703671    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:09.706720    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:09.904070    3636 request.go:629] Waited for 196.771647ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:09.904118    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:09.904125    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:09.904134    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:09.904140    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:09.906439    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:09.906766    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:09.906776    3636 pod_ready.go:81] duration metric: took 395.046014ms for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:09.906787    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:10.104935    3636 request.go:629] Waited for 198.017235ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:40:10.105019    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:40:10.105031    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:10.105061    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:10.105068    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:10.108223    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:10.304013    3636 request.go:629] Waited for 195.251924ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:10.304073    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:10.304086    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:10.304097    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:10.304106    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:10.307327    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:10.307882    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:10.307891    3636 pod_ready.go:81] duration metric: took 401.08706ms for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:10.307899    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:10.504739    3636 request.go:629] Waited for 196.801571ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:40:10.504780    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:40:10.504821    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:10.504827    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:10.504831    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:10.506960    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:10.703733    3636 request.go:629] Waited for 196.095597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:40:10.703831    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:40:10.703840    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:10.703866    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:10.703875    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:10.706696    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:10.707101    3636 pod_ready.go:92] pod "kube-proxy-5wcph" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:10.707111    3636 pod_ready.go:81] duration metric: took 399.196595ms for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:10.707118    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:10.903773    3636 request.go:629] Waited for 196.61026ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:40:10.903910    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:40:10.903927    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:10.903945    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:10.903955    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:10.906117    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:11.104247    3636 request.go:629] Waited for 197.64653ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:11.104330    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:11.104339    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:11.104351    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:11.104362    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:11.107473    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:11.107930    3636 pod_ready.go:92] pod "kube-proxy-h7k9z" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:11.107945    3636 pod_ready.go:81] duration metric: took 400.810357ms for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:11.107954    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:11.304083    3636 request.go:629] Waited for 196.074281ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:40:11.304132    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:40:11.304139    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:11.304147    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:11.304151    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:11.306391    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:11.503460    3636 request.go:629] Waited for 196.558235ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:11.503507    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:11.503513    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:11.503519    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:11.503523    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:11.505457    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:11.505774    3636 pod_ready.go:92] pod "kube-proxy-hst7h" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:11.505785    3636 pod_ready.go:81] duration metric: took 397.815014ms for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:11.505792    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:11.704821    3636 request.go:629] Waited for 198.981688ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:40:11.704918    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:40:11.704927    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:11.704933    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:11.704936    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:11.707262    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:11.903612    3636 request.go:629] Waited for 195.874248ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:11.903682    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:11.903689    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:11.903696    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:11.903700    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:11.905982    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:11.906348    3636 pod_ready.go:92] pod "kube-proxy-v6jxh" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:11.906359    3636 pod_ready.go:81] duration metric: took 400.551047ms for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:11.906369    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:12.103492    3636 request.go:629] Waited for 197.075685ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:40:12.103568    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:40:12.103574    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:12.103580    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:12.103585    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:12.105506    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:12.303814    3636 request.go:629] Waited for 197.930746ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:12.303844    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:12.303850    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:12.303867    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:12.303874    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:12.305845    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:12.306164    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:12.306174    3636 pod_ready.go:81] duration metric: took 399.787712ms for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:12.306181    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:12.503949    3636 request.go:629] Waited for 197.718801ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:40:12.504068    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:40:12.504079    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:12.504087    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:12.504093    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:12.506372    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:12.704852    3636 request.go:629] Waited for 198.155745ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:12.704924    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:12.704932    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:12.704940    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:12.704944    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:12.707307    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:12.707616    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:12.707626    3636 pod_ready.go:81] duration metric: took 401.429815ms for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:12.707633    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:12.903728    3636 request.go:629] Waited for 196.035029ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:40:12.903828    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:40:12.903836    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:12.903842    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:12.903845    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:12.906224    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:13.103515    3636 request.go:629] Waited for 196.951957ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:13.103588    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:13.103593    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:13.103599    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:13.103603    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:13.105622    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:13.106020    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:13.106029    3636 pod_ready.go:81] duration metric: took 398.380033ms for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:13.106046    3636 pod_ready.go:38] duration metric: took 5.59825813s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:40:13.106061    3636 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 10:40:13.106113    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 10:40:13.116872    3636 system_svc.go:56] duration metric: took 10.807598ms WaitForService to wait for kubelet
	I0717 10:40:13.116887    3636 kubeadm.go:582] duration metric: took 5.884130758s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:40:13.116904    3636 node_conditions.go:102] verifying NodePressure condition ...
	I0717 10:40:13.303772    3636 request.go:629] Waited for 186.81691ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0717 10:40:13.303803    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0717 10:40:13.303807    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:13.303841    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:13.303846    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:13.306895    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:13.307714    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:40:13.307729    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:40:13.307740    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:40:13.307744    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:40:13.307748    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:40:13.307751    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:40:13.307757    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:40:13.307761    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:40:13.307764    3636 node_conditions.go:105] duration metric: took 190.851869ms to run NodePressure ...
	I0717 10:40:13.307772    3636 start.go:241] waiting for startup goroutines ...
	I0717 10:40:13.307786    3636 start.go:255] writing updated cluster config ...
	I0717 10:40:13.308139    3636 ssh_runner.go:195] Run: rm -f paused
	I0717 10:40:13.349733    3636 start.go:600] kubectl: 1.29.2, cluster: 1.30.2 (minor skew: 1)
	I0717 10:40:13.371543    3636 out.go:177] * Done! kubectl is now configured to use "ha-572000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 17 17:38:43 ha-572000 dockerd[1183]: time="2024-07-17T17:38:43.318326173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:38:43 ha-572000 dockerd[1183]: time="2024-07-17T17:38:43.318386099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:38:43 ha-572000 dockerd[1183]: time="2024-07-17T17:38:43.318398421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:43 ha-572000 dockerd[1183]: time="2024-07-17T17:38:43.318954035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:43 ha-572000 dockerd[1183]: time="2024-07-17T17:38:43.319450280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.340195606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.340255461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.340333620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.340397061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.341315078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.341404694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.341501856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.343515271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.343612113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.343637500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.343972230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.346166794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:45 ha-572000 dockerd[1183]: time="2024-07-17T17:38:45.310104278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:38:45 ha-572000 dockerd[1183]: time="2024-07-17T17:38:45.310177463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:38:45 ha-572000 dockerd[1183]: time="2024-07-17T17:38:45.310195349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:45 ha-572000 dockerd[1183]: time="2024-07-17T17:38:45.310377303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:39:13 ha-572000 dockerd[1176]: time="2024-07-17T17:39:13.526781737Z" level=info msg="ignoring event" container=a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 17:39:13 ha-572000 dockerd[1183]: time="2024-07-17T17:39:13.527422614Z" level=info msg="shim disconnected" id=a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403 namespace=moby
	Jul 17 17:39:13 ha-572000 dockerd[1183]: time="2024-07-17T17:39:13.527577585Z" level=warning msg="cleaning up after shim disconnected" id=a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403 namespace=moby
	Jul 17 17:39:13 ha-572000 dockerd[1183]: time="2024-07-17T17:39:13.527671021Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0544a7b38aa20       cbb01a7bd410d                                                                                         About a minute ago   Running             coredns                   1                   211b5a6515354       coredns-7db6d8ff4d-9dzd5
	2f15e40a181ae       53c535741fb44                                                                                         About a minute ago   Running             kube-proxy                1                   4aab8735c2c04       kube-proxy-hst7h
	a5d6b6937bc80       8c811b4aec35f                                                                                         About a minute ago   Running             busybox                   1                   24dc28c9171d4       busybox-fc5497c4f-5r4wl
	90d12ecf2a207       5cc3abe5717db                                                                                         About a minute ago   Running             kindnet-cni               1                   c4ad8ae388e4c       kindnet-t85bv
	a82cf6255e5a9       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   be6e24303245d       storage-provisioner
	22dbe2e88f6f6       cbb01a7bd410d                                                                                         About a minute ago   Running             coredns                   1                   ebfbe4a086eb8       coredns-7db6d8ff4d-2phrp
	d0c5e4f0005b0       e874818b3caac                                                                                         About a minute ago   Running             kube-controller-manager   6                   3143df977771c       kube-controller-manager-ha-572000
	2988c5a570cb1       38af8ddebf499                                                                                         2 minutes ago        Running             kube-vip                  1                   bb35c323d1311       kube-vip-ha-572000
	b589feb3cd968       7820c83aa1394                                                                                         2 minutes ago        Running             kube-scheduler            2                   1f36c956df9c2       kube-scheduler-ha-572000
	c4604d37a9454       3861cfcd7c04c                                                                                         2 minutes ago        Running             etcd                      3                   73d23719d576c       etcd-ha-572000
	490b99a8cd7e0       56ce0fd9fb532                                                                                         2 minutes ago        Running             kube-apiserver            6                   43743c72743dc       kube-apiserver-ha-572000
	caed8fc7c24d9       e874818b3caac                                                                                         2 minutes ago        Exited              kube-controller-manager   5                   3143df977771c       kube-controller-manager-ha-572000
	cd333393aa057       56ce0fd9fb532                                                                                         3 minutes ago        Exited              kube-apiserver            5                   6d7eb0e874999       kube-apiserver-ha-572000
	b6b4ce34842d6       3861cfcd7c04c                                                                                         3 minutes ago        Exited              etcd                      2                   986ceb5a6f870       etcd-ha-572000
	138bf6784d59c       38af8ddebf499                                                                                         7 minutes ago        Exited              kube-vip                  0                   df04438a4c5cc       kube-vip-ha-572000
	a53f8fcdf5d97       7820c83aa1394                                                                                         7 minutes ago        Exited              kube-scheduler            1                   bfd880612991e       kube-scheduler-ha-572000
	e1a5eb1bed550       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   9 minutes ago        Exited              busybox                   0                   29ab413131af2       busybox-fc5497c4f-5r4wl
	bb44d784bb7ab       cbb01a7bd410d                                                                                         12 minutes ago       Exited              coredns                   0                   b8c622f08395f       coredns-7db6d8ff4d-2phrp
	7b275812468c9       cbb01a7bd410d                                                                                         12 minutes ago       Exited              coredns                   0                   2588bd7c40c23       coredns-7db6d8ff4d-9dzd5
	6e40e1427ab20       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              12 minutes ago       Exited              kindnet-cni               0                   32dc836c0a2df       kindnet-t85bv
	2aeed19835352       53c535741fb44                                                                                         12 minutes ago       Exited              kube-proxy                0                   f688e08d591be       kube-proxy-hst7h
	
	
	==> coredns [0544a7b38aa2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47730 - 44649 "HINFO IN 7657991150461714427.6847867729784937660. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009507113s
	
	
	==> coredns [22dbe2e88f6f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50584 - 51756 "HINFO IN 3888167032918365436.646455749640363721. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.007934252s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1469986290]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 17:38:43.514) (total time: 30002ms):
	Trace[1469986290]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (17:39:13.516)
	Trace[1469986290]: [30.002760682s] [30.002760682s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1457962466]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 17:38:43.515) (total time: 30001ms):
	Trace[1457962466]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:39:13.516)
	Trace[1457962466]: [30.001713432s] [30.001713432s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[94258701]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 17:38:43.514) (total time: 30003ms):
	Trace[94258701]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (17:39:13.516)
	Trace[94258701]: [30.003582814s] [30.003582814s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [7b275812468c] <==
	[INFO] 10.244.0.4:49035 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001628883s
	[INFO] 10.244.0.4:59665 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081818s
	[INFO] 10.244.0.4:52274 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077378s
	[INFO] 10.244.1.2:47113 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103907s
	[INFO] 10.244.2.2:57796 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060561s
	[INFO] 10.244.2.2:40339 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000425199s
	[INFO] 10.244.2.2:38339 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010468s
	[INFO] 10.244.2.2:50750 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070547s
	[INFO] 10.244.0.4:57426 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000045039s
	[INFO] 10.244.0.4:44283 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031195s
	[INFO] 10.244.1.2:56783 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117359s
	[INFO] 10.244.1.2:34315 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061358s
	[INFO] 10.244.1.2:55792 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062223s
	[INFO] 10.244.2.2:35768 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086426s
	[INFO] 10.244.2.2:42473 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102022s
	[INFO] 10.244.0.4:53524 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000087177s
	[INFO] 10.244.0.4:35224 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079092s
	[INFO] 10.244.0.4:53020 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075585s
	[INFO] 10.244.1.2:58074 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138293s
	[INFO] 10.244.1.2:40567 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000088337s
	[INFO] 10.244.1.2:46301 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000065328s
	[INFO] 10.244.1.2:59898 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075157s
	[INFO] 10.244.2.2:48754 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000081888s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bb44d784bb7a] <==
	[INFO] 10.244.0.4:54052 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001019206s
	[INFO] 10.244.0.4:45412 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077905s
	[INFO] 10.244.0.4:37924 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159382s
	[INFO] 10.244.1.2:45862 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108196s
	[INFO] 10.244.1.2:47464 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000616154s
	[INFO] 10.244.1.2:53720 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059338s
	[INFO] 10.244.1.2:49774 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123053s
	[INFO] 10.244.1.2:54472 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000097287s
	[INFO] 10.244.1.2:52054 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064095s
	[INFO] 10.244.1.2:54428 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064142s
	[INFO] 10.244.2.2:53088 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109393s
	[INFO] 10.244.2.2:49993 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000114279s
	[INFO] 10.244.2.2:42708 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063661s
	[INFO] 10.244.2.2:57150 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000043434s
	[INFO] 10.244.0.4:45987 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112397s
	[INFO] 10.244.0.4:44116 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057797s
	[INFO] 10.244.1.2:37635 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105428s
	[INFO] 10.244.2.2:52081 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131248s
	[INFO] 10.244.2.2:54912 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087858s
	[INFO] 10.244.0.4:53981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079367s
	[INFO] 10.244.2.2:45850 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152036s
	[INFO] 10.244.2.2:36004 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000223s
	[INFO] 10.244.2.2:54459 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000128834s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-572000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-572000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-572000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T10_27_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:27:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-572000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:40:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:38:16 +0000   Wed, 17 Jul 2024 17:27:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:38:16 +0000   Wed, 17 Jul 2024 17:27:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:38:16 +0000   Wed, 17 Jul 2024 17:27:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:38:16 +0000   Wed, 17 Jul 2024 17:27:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-572000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 cc4828ff3a4b410d87d0a2c48b8c546d
	  System UUID:                5f264258-0000-0000-9840-7856c1bd4173
	  Boot ID:                    2568bff2-eded-45b6-850c-4c0e9d36f966
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5r4wl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-2phrp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7db6d8ff4d-9dzd5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-572000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-t85bv                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-572000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-572000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-hst7h                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-572000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-572000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  Starting                 91s                    kube-proxy       
	  Normal  Starting                 12m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                    kubelet          Node ha-572000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                    kubelet          Node ha-572000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                    kubelet          Node ha-572000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  NodeReady                12m                    kubelet          Node ha-572000 status is now: NodeReady
	  Normal  RegisteredNode           11m                    node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  RegisteredNode           8m8s                   node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  NodeHasSufficientMemory  2m36s (x8 over 2m36s)  kubelet          Node ha-572000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m36s (x8 over 2m36s)  kubelet          Node ha-572000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m36s (x7 over 2m36s)  kubelet          Node ha-572000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           115s                   node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  RegisteredNode           94s                    node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  RegisteredNode           88s                    node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	
	
	Name:               ha-572000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-572000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-572000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T10_28_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:28:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-572000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:40:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:38:08 +0000   Wed, 17 Jul 2024 17:28:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:38:08 +0000   Wed, 17 Jul 2024 17:28:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:38:08 +0000   Wed, 17 Jul 2024 17:28:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:38:08 +0000   Wed, 17 Jul 2024 17:28:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-572000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 21a94638d6914aaeb48a6d7a895c9b99
	  System UUID:                b5da4916-0000-0000-aec8-9a96c30c8c05
	  Boot ID:                    d3f575b3-f9f0-45ee-bee7-6209fb3d26a4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9sdw5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-572000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-g2m92                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-572000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-572000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-v6jxh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-572000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-572000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m1s                   kube-proxy       
	  Normal   Starting                 8m21s                  kube-proxy       
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-572000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-572000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node ha-572000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                    node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   Starting                 8m24s                  kubelet          Starting kubelet.
	  Warning  Rebooted                 8m24s                  kubelet          Node ha-572000-m02 has been rebooted, boot id: 7661c0d0-1379-4b0e-b101-3961fae1a207
	  Normal   NodeHasSufficientPID     8m24s                  kubelet          Node ha-572000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  8m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8m24s                  kubelet          Node ha-572000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m24s                  kubelet          Node ha-572000-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           8m8s                   node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   Starting                 2m18s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m17s (x8 over 2m18s)  kubelet          Node ha-572000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m17s (x8 over 2m18s)  kubelet          Node ha-572000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m17s (x7 over 2m18s)  kubelet          Node ha-572000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           115s                   node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   RegisteredNode           94s                    node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   RegisteredNode           88s                    node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	
	
	Name:               ha-572000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-572000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-572000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T10_29_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:29:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-572000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:40:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:38:31 +0000   Wed, 17 Jul 2024 17:29:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:38:31 +0000   Wed, 17 Jul 2024 17:29:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:38:31 +0000   Wed, 17 Jul 2024 17:29:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:38:31 +0000   Wed, 17 Jul 2024 17:30:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-572000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 be52acddd53148cc8c17d6c21c17abf3
	  System UUID:                50644be4-0000-0000-8d75-15b09204e5f5
	  Boot ID:                    f2b4f1ba-89fc-4cd2-a163-0306f2bae7bb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jhz2d                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-572000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-72zfp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-572000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-572000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-h7k9z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-572000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-572000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 101s               kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-572000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-572000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-572000-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   RegisteredNode           8m8s               node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   RegisteredNode           115s               node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   Starting                 104s               kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  104s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  104s               kubelet          Node ha-572000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    104s               kubelet          Node ha-572000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     104s               kubelet          Node ha-572000-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 104s               kubelet          Node ha-572000-m03 has been rebooted, boot id: f2b4f1ba-89fc-4cd2-a163-0306f2bae7bb
	  Normal   RegisteredNode           94s                node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   RegisteredNode           88s                node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	
	
	Name:               ha-572000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-572000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-572000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T10_30_40_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:30:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-572000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:40:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:40:07 +0000   Wed, 17 Jul 2024 17:40:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:40:07 +0000   Wed, 17 Jul 2024 17:40:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:40:07 +0000   Wed, 17 Jul 2024 17:40:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:40:07 +0000   Wed, 17 Jul 2024 17:40:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-572000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 a064c491460940e4967dc27f529a5ea6
	  System UUID:                d62b4091-0000-0000-a1f9-ae55052b3d93
	  Boot ID:                    9c875bb7-4ccf-49df-b662-ce64a8634436
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5xsrp       100m (5%)   100m (5%)   50Mi (2%)        50Mi (2%)      9m36s
	  kube-system                 kube-proxy-5wcph    0 (0%)      0 (0%)      0 (0%)           0 (0%)         9m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m29s                  kube-proxy       
	  Normal   Starting                 6s                     kube-proxy       
	  Normal   NodeHasSufficientMemory  9m36s (x2 over 9m36s)  kubelet          Node ha-572000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m36s (x2 over 9m36s)  kubelet          Node ha-572000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m36s (x2 over 9m36s)  kubelet          Node ha-572000-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m35s                  node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   NodeAllocatableEnforced  9m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m34s                  node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   RegisteredNode           9m33s                  node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   NodeReady                9m13s                  kubelet          Node ha-572000-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m8s                   node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   RegisteredNode           115s                   node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   RegisteredNode           94s                    node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   RegisteredNode           88s                    node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   NodeNotReady             75s                    node-controller  Node ha-572000-m04 status is now: NodeNotReady
	  Normal   Starting                 8s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                     kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)        kubelet          Node ha-572000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)        kubelet          Node ha-572000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)        kubelet          Node ha-572000-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                     kubelet          Node ha-572000-m04 has been rebooted, boot id: 9c875bb7-4ccf-49df-b662-ce64a8634436
	  Normal   NodeReady                8s                     kubelet          Node ha-572000-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.035701] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007982] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.369068] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006691] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.635959] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.223787] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.844039] systemd-fstab-generator[468]: Ignoring "noauto" option for root device
	[  +0.100018] systemd-fstab-generator[480]: Ignoring "noauto" option for root device
	[  +1.895052] systemd-fstab-generator[1104]: Ignoring "noauto" option for root device
	[  +0.053692] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.194931] systemd-fstab-generator[1141]: Ignoring "noauto" option for root device
	[  +0.116874] systemd-fstab-generator[1153]: Ignoring "noauto" option for root device
	[  +0.104796] systemd-fstab-generator[1167]: Ignoring "noauto" option for root device
	[  +2.435008] systemd-fstab-generator[1384]: Ignoring "noauto" option for root device
	[  +0.114297] systemd-fstab-generator[1396]: Ignoring "noauto" option for root device
	[  +0.106280] systemd-fstab-generator[1408]: Ignoring "noauto" option for root device
	[  +0.119247] systemd-fstab-generator[1423]: Ignoring "noauto" option for root device
	[  +0.407183] systemd-fstab-generator[1582]: Ignoring "noauto" option for root device
	[  +6.782353] kauditd_printk_skb: 234 callbacks suppressed
	[Jul17 17:38] kauditd_printk_skb: 40 callbacks suppressed
	[ +35.726193] kauditd_printk_skb: 25 callbacks suppressed
	[Jul17 17:39] kauditd_printk_skb: 50 callbacks suppressed
	
	
	==> etcd [b6b4ce34842d] <==
	{"level":"info","ts":"2024-07-17T17:37:06.183089Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-07-17T17:37:07.625159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:07.625381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:07.625508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:07.625789Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:07.626021Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:09.125694Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:09.125798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:09.125845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:09.12599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:09.12619Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:10.625744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:10.62582Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:10.625858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:10.625915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:10.625982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	{"level":"warn","ts":"2024-07-17T17:37:11.167194Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f6a6d94fe6ea5bc8","rtt":"0s"}
	{"level":"warn","ts":"2024-07-17T17:37:11.167486Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f6a6d94fe6ea5bc8","rtt":"0s"}
	{"level":"warn","ts":"2024-07-17T17:37:11.185338Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1d3f36ee75516151","rtt":"0s"}
	{"level":"warn","ts":"2024-07-17T17:37:11.185403Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1d3f36ee75516151","rtt":"0s"}
	{"level":"info","ts":"2024-07-17T17:37:12.128113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:12.128317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:12.128469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:12.128888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:12.129376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	
	
	==> etcd [c4604d37a945] <==
	{"level":"warn","ts":"2024-07-17T17:38:22.257766Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:22.317122Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:22.320897Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:22.322427Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:22.357867Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:22.457051Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:23.802501Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"1d3f36ee75516151","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:38:23.802583Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"1d3f36ee75516151","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:38:26.684167Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1d3f36ee75516151","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:38:26.684258Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1d3f36ee75516151","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:38:27.804044Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"1d3f36ee75516151","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:38:27.804236Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"1d3f36ee75516151","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-17T17:38:30.560424Z","caller":"traceutil/trace.go:171","msg":"trace[1448908815] transaction","detail":"{read_only:false; response_revision:1848; number_of_response:1; }","duration":"129.153763ms","start":"2024-07-17T17:38:30.431252Z","end":"2024-07-17T17:38:30.560406Z","steps":["trace[1448908815] 'process raft request'  (duration: 107.083433ms)","trace[1448908815] 'compare'  (duration: 21.91661ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T17:38:30.59812Z","caller":"traceutil/trace.go:171","msg":"trace[2102820773] transaction","detail":"{read_only:false; response_revision:1849; number_of_response:1; }","duration":"165.419706ms","start":"2024-07-17T17:38:30.432685Z","end":"2024-07-17T17:38:30.598105Z","steps":["trace[2102820773] 'process raft request'  (duration: 165.353536ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:38:31.684736Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1d3f36ee75516151","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:38:31.685061Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1d3f36ee75516151","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:38:31.806282Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"1d3f36ee75516151","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:38:31.80678Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"1d3f36ee75516151","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-17T17:38:32.609183Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:38:32.616715Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:38:32.617138Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:38:32.619682Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"1d3f36ee75516151","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-17T17:38:32.619894Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:38:32.624292Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"1d3f36ee75516151","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-17T17:38:32.625462Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151"}
	
	
	==> kernel <==
	 17:40:16 up 2 min,  0 users,  load average: 0.18, 0.09, 0.03
	Linux ha-572000 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6e40e1427ab2] <==
	I0717 17:31:56.892269       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:32:06.898213       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:32:06.898253       1 main.go:303] handling current node
	I0717 17:32:06.898264       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:32:06.898269       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:32:06.898416       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:32:06.898443       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:32:06.898526       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:32:06.898555       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:32:16.896377       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:32:16.896415       1 main.go:303] handling current node
	I0717 17:32:16.896426       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:32:16.896432       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:32:16.896606       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:32:16.896636       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:32:16.896674       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:32:16.896699       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:32:26.896557       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:32:26.896622       1 main.go:303] handling current node
	I0717 17:32:26.896678       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:32:26.896718       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:32:26.896938       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:32:26.897017       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:32:26.897158       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:32:26.897880       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [90d12ecf2a20] <==
	I0717 17:39:45.427615       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:39:55.431585       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:39:55.431619       1 main.go:303] handling current node
	I0717 17:39:55.431633       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:39:55.431639       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:39:55.431782       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:39:55.431791       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:39:55.431847       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:39:55.431854       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:40:05.434801       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:40:05.434852       1 main.go:303] handling current node
	I0717 17:40:05.434866       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:40:05.434873       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:40:05.435156       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:40:05.435194       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:40:05.435277       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:40:05.435363       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:40:15.426184       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:40:15.426228       1 main.go:303] handling current node
	I0717 17:40:15.426238       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:40:15.426243       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:40:15.426375       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:40:15.426402       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:40:15.426512       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:40:15.426539       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [490b99a8cd7e] <==
	I0717 17:38:06.692598       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0717 17:38:06.695172       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 17:38:06.753691       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 17:38:06.754495       1 policy_source.go:224] refreshing policies
	I0717 17:38:06.761461       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 17:38:06.775946       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 17:38:06.777937       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 17:38:06.777967       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 17:38:06.785861       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 17:38:06.785861       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 17:38:06.789965       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0717 17:38:06.785881       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 17:38:06.790098       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 17:38:06.790136       1 aggregator.go:165] initial CRD sync complete...
	I0717 17:38:06.790141       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 17:38:06.790145       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 17:38:06.790148       1 cache.go:39] Caches are synced for autoregister controller
	W0717 17:38:06.822673       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.7]
	I0717 17:38:06.824170       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 17:38:06.847080       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 17:38:06.894480       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0717 17:38:06.899931       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0717 17:38:07.685599       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0717 17:38:07.910228       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5 192.169.0.7]
	W0717 17:38:27.915985       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5 192.169.0.6]
	
	
	==> kube-apiserver [cd333393aa05] <==
	I0717 17:37:11.795742       1 options.go:221] external host was not specified, using 192.169.0.5
	I0717 17:37:11.796641       1 server.go:148] Version: v1.30.2
	I0717 17:37:11.796774       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:37:12.098000       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0717 17:37:12.100463       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 17:37:12.102906       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0717 17:37:12.102927       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0717 17:37:12.103040       1 instance.go:299] Using reconciler: lease
	W0717 17:37:13.058091       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:59336->127.0.0.1:2379: read: connection reset by peer"
	W0717 17:37:13.058287       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:59310->127.0.0.1:2379: read: connection reset by peer"
	W0717 17:37:13.058569       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:59320->127.0.0.1:2379: read: connection reset by peer"
	
	
	==> kube-controller-manager [caed8fc7c24d] <==
	I0717 17:37:47.127601       1 serving.go:380] Generated self-signed cert in-memory
	I0717 17:37:47.646900       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0717 17:37:47.646935       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:37:47.649809       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0717 17:37:47.649838       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 17:37:47.650220       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 17:37:47.649847       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0717 17:38:07.655360       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/start-system-n
amespaces-controller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [d0c5e4f0005b] <==
	I0717 17:38:41.355830       1 shared_informer.go:320] Caches are synced for PVC protection
	I0717 17:38:41.359928       1 shared_informer.go:320] Caches are synced for GC
	I0717 17:38:41.362350       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0717 17:38:41.364853       1 shared_informer.go:320] Caches are synced for ephemeral
	I0717 17:38:41.366792       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0717 17:38:41.424626       1 shared_informer.go:320] Caches are synced for cronjob
	I0717 17:38:41.432004       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0717 17:38:41.511531       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 17:38:41.518940       1 shared_informer.go:320] Caches are synced for daemon sets
	I0717 17:38:41.541830       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 17:38:41.550619       1 shared_informer.go:320] Caches are synced for stateful set
	I0717 17:38:41.975157       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 17:38:41.982462       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 17:38:41.982520       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 17:38:43.635302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.818µs"
	I0717 17:38:44.733712       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.810534ms"
	I0717 17:38:44.734043       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.445µs"
	I0717 17:38:45.721419       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.76µs"
	I0717 17:38:45.768611       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-9v69m EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-9v69m\": the object has been modified; please apply your changes to the latest version and try again"
	I0717 17:38:45.771754       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"7c540b68-a08e-44ac-9c69-ea596263c8eb", APIVersion:"v1", ResourceVersion:"260", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-9v69m EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-9v69m": the object has been modified; please apply your changes to the latest version and try again
	I0717 17:38:45.781131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.861246ms"
	I0717 17:38:45.781831       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47.528µs"
	I0717 17:39:19.551280       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.494894ms"
	I0717 17:39:19.551568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="81.124µs"
	I0717 17:40:07.684329       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-572000-m04"
	
	
	==> kube-proxy [2aeed1983535] <==
	I0717 17:27:43.315695       1 server_linux.go:69] "Using iptables proxy"
	I0717 17:27:43.322673       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	I0717 17:27:43.354011       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 17:27:43.354032       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 17:27:43.354044       1 server_linux.go:165] "Using iptables Proxier"
	I0717 17:27:43.355997       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 17:27:43.356216       1 server.go:872] "Version info" version="v1.30.2"
	I0717 17:27:43.356225       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:27:43.356874       1 config.go:192] "Starting service config controller"
	I0717 17:27:43.356903       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 17:27:43.356921       1 config.go:319] "Starting node config controller"
	I0717 17:27:43.356943       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 17:27:43.357077       1 config.go:101] "Starting endpoint slice config controller"
	I0717 17:27:43.357144       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 17:27:43.457513       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 17:27:43.457607       1 shared_informer.go:320] Caches are synced for node config
	I0717 17:27:43.457639       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [2f15e40a181a] <==
	I0717 17:38:44.762819       1 server_linux.go:69] "Using iptables proxy"
	I0717 17:38:44.783856       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	I0717 17:38:44.830838       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 17:38:44.830870       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 17:38:44.830884       1 server_linux.go:165] "Using iptables Proxier"
	I0717 17:38:44.834309       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 17:38:44.834864       1 server.go:872] "Version info" version="v1.30.2"
	I0717 17:38:44.834894       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:38:44.836964       1 config.go:192] "Starting service config controller"
	I0717 17:38:44.837593       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 17:38:44.837672       1 config.go:101] "Starting endpoint slice config controller"
	I0717 17:38:44.837678       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 17:38:44.839841       1 config.go:319] "Starting node config controller"
	I0717 17:38:44.839870       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 17:38:44.938549       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 17:38:44.938751       1 shared_informer.go:320] Caches are synced for service config
	I0717 17:38:44.940510       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a53f8fcdf5d9] <==
	E0717 17:36:41.264926       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:36:42.998657       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:36:42.998862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:36:43.326673       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:36:43.327166       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:36:45.184656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:36:45.185412       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:36:52.182490       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:36:52.182723       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:37:00.423142       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:00.423274       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:37:01.259659       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:01.260400       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:37:02.377758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:02.378082       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:37:08.932628       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:08.932761       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:37:09.428412       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:09.428505       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:13.065507       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0717 17:37:13.067197       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0717 17:37:13.067371       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0717 17:37:13.067559       1 shared_informer.go:316] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 17:37:13.067604       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0717 17:37:13.067950       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b589feb3cd96] <==
	I0717 17:37:47.052011       1 serving.go:380] Generated self-signed cert in-memory
	W0717 17:37:57.430329       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0717 17:37:57.430356       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 17:37:57.430361       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 17:38:06.715078       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0717 17:38:06.715131       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:38:06.719828       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 17:38:06.720025       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 17:38:06.720059       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 17:38:06.720073       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 17:38:06.820740       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 17:38:39 ha-572000 kubelet[1589]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:38:39 ha-572000 kubelet[1589]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:38:39 ha-572000 kubelet[1589]: I0717 17:38:39.302284    1589 scope.go:117] "RemoveContainer" containerID="9200160f355ce6c552f980f7ed46283a5abfcee202d68ed4d026b62b5f09378f"
	Jul 17 17:38:43 ha-572000 kubelet[1589]: I0717 17:38:43.248499    1589 scope.go:117] "RemoveContainer" containerID="bb44d784bb7ab822072739958ae678f3a02d43caf6fe9538c0f06ebef18ea342"
	Jul 17 17:38:43 ha-572000 kubelet[1589]: I0717 17:38:43.249450    1589 scope.go:117] "RemoveContainer" containerID="12ba2e181ee9ae3666a5ca0e759c24d2ccb54439a79a38efff74cf14a40e784a"
	Jul 17 17:38:44 ha-572000 kubelet[1589]: I0717 17:38:44.247542    1589 scope.go:117] "RemoveContainer" containerID="6e40e1427ab20e20a4e59edefca31cfa827b45b6f6b76ae115559d4affa80801"
	Jul 17 17:38:44 ha-572000 kubelet[1589]: I0717 17:38:44.247737    1589 scope.go:117] "RemoveContainer" containerID="2aeed19835352538242328918de029a46e7a1c2c0337d634b785ef7be5db5332"
	Jul 17 17:38:44 ha-572000 kubelet[1589]: I0717 17:38:44.248176    1589 scope.go:117] "RemoveContainer" containerID="e1a5eb1bed550849fe01b413e967df27558ab752f138608980b41a250955e5cb"
	Jul 17 17:38:45 ha-572000 kubelet[1589]: I0717 17:38:45.248442    1589 scope.go:117] "RemoveContainer" containerID="7b275812468c9bd27f22db306363aca5bc7fa0141fc09681bf430d6ef78fe048"
	Jul 17 17:39:13 ha-572000 kubelet[1589]: I0717 17:39:13.938023    1589 scope.go:117] "RemoveContainer" containerID="12ba2e181ee9ae3666a5ca0e759c24d2ccb54439a79a38efff74cf14a40e784a"
	Jul 17 17:39:13 ha-572000 kubelet[1589]: I0717 17:39:13.938223    1589 scope.go:117] "RemoveContainer" containerID="a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403"
	Jul 17 17:39:13 ha-572000 kubelet[1589]: E0717 17:39:13.938325    1589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1801216d-6529-4049-b874-1577132fc03f)\"" pod="kube-system/storage-provisioner" podUID="1801216d-6529-4049-b874-1577132fc03f"
	Jul 17 17:39:28 ha-572000 kubelet[1589]: I0717 17:39:28.248196    1589 scope.go:117] "RemoveContainer" containerID="a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403"
	Jul 17 17:39:28 ha-572000 kubelet[1589]: E0717 17:39:28.248343    1589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1801216d-6529-4049-b874-1577132fc03f)\"" pod="kube-system/storage-provisioner" podUID="1801216d-6529-4049-b874-1577132fc03f"
	Jul 17 17:39:39 ha-572000 kubelet[1589]: E0717 17:39:39.270524    1589 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:39:39 ha-572000 kubelet[1589]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:39:39 ha-572000 kubelet[1589]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:39:39 ha-572000 kubelet[1589]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:39:39 ha-572000 kubelet[1589]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:39:43 ha-572000 kubelet[1589]: I0717 17:39:43.248697    1589 scope.go:117] "RemoveContainer" containerID="a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403"
	Jul 17 17:39:43 ha-572000 kubelet[1589]: E0717 17:39:43.249374    1589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1801216d-6529-4049-b874-1577132fc03f)\"" pod="kube-system/storage-provisioner" podUID="1801216d-6529-4049-b874-1577132fc03f"
	Jul 17 17:39:54 ha-572000 kubelet[1589]: I0717 17:39:54.247534    1589 scope.go:117] "RemoveContainer" containerID="a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403"
	Jul 17 17:39:54 ha-572000 kubelet[1589]: E0717 17:39:54.248369    1589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1801216d-6529-4049-b874-1577132fc03f)\"" pod="kube-system/storage-provisioner" podUID="1801216d-6529-4049-b874-1577132fc03f"
	Jul 17 17:40:07 ha-572000 kubelet[1589]: I0717 17:40:07.247771    1589 scope.go:117] "RemoveContainer" containerID="a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403"
	Jul 17 17:40:07 ha-572000 kubelet[1589]: E0717 17:40:07.248147    1589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1801216d-6529-4049-b874-1577132fc03f)\"" pod="kube-system/storage-provisioner" podUID="1801216d-6529-4049-b874-1577132fc03f"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-572000 -n ha-572000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-572000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (177.05s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (6.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (2.807044808s)
ha_test.go:413: expected profile "ha-572000" in json of 'profile list' to have "Degraded" status but have "HAppy" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-572000\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-572000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-572000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"Kubern
etesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":fal
se,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\
"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-572000 -n ha-572000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-572000 logs -n 25: (3.352498984s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-572000 cp ha-572000-m03:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04:/home/docker/cp-test_ha-572000-m03_ha-572000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m04 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m03_ha-572000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-572000 cp testdata/cp-test.txt                                                                                            | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3497274976/001/cp-test_ha-572000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000:/home/docker/cp-test_ha-572000-m04_ha-572000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000 sudo cat                                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m04_ha-572000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m02:/home/docker/cp-test_ha-572000-m04_ha-572000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m02 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m04_ha-572000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m03:/home/docker/cp-test_ha-572000-m04_ha-572000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m03 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m04_ha-572000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-572000 node stop m02 -v=7                                                                                                 | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-572000 node start m02 -v=7                                                                                                | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:32 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-572000 -v=7                                                                                                       | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-572000 -v=7                                                                                                            | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT | 17 Jul 24 10:32 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-572000 --wait=true -v=7                                                                                                | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-572000                                                                                                            | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:34 PDT |                     |
	| node    | ha-572000 node delete m03 -v=7                                                                                               | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:34 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-572000 stop -v=7                                                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:34 PDT | 17 Jul 24 10:37 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-572000 --wait=true                                                                                                     | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:37 PDT | 17 Jul 24 10:40 PDT |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 10:37:21
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 10:37:21.160279    3636 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:37:21.160444    3636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:37:21.160449    3636 out.go:304] Setting ErrFile to fd 2...
	I0717 10:37:21.160453    3636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:37:21.160640    3636 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
	I0717 10:37:21.162037    3636 out.go:298] Setting JSON to false
	I0717 10:37:21.184380    3636 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2212,"bootTime":1721235629,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0717 10:37:21.184474    3636 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:37:21.206845    3636 out.go:177] * [ha-572000] minikube v1.33.1 on Darwin 14.5
	I0717 10:37:21.250316    3636 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 10:37:21.250374    3636 notify.go:220] Checking for updates...
	I0717 10:37:21.294243    3636 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:37:21.315083    3636 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 10:37:21.336268    3636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:37:21.357529    3636 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
	I0717 10:37:21.379368    3636 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:37:21.401138    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:21.401903    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:21.401985    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:21.411459    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51932
	I0717 10:37:21.411825    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:21.412241    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:21.412256    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:21.412501    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:21.412634    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:21.412826    3636 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:37:21.413099    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:21.413120    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:21.421537    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51934
	I0717 10:37:21.421880    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:21.422209    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:21.422224    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:21.422446    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:21.422563    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:21.451265    3636 out.go:177] * Using the hyperkit driver based on existing profile
	I0717 10:37:21.493400    3636 start.go:297] selected driver: hyperkit
	I0717 10:37:21.493425    3636 start.go:901] validating driver "hyperkit" against &{Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:37:21.493682    3636 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:37:21.493865    3636 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:37:21.494086    3636 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19283-1099/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0717 10:37:21.503763    3636 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0717 10:37:21.507648    3636 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:21.507668    3636 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0717 10:37:21.510386    3636 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:37:21.510420    3636 cni.go:84] Creating CNI manager for ""
	I0717 10:37:21.510429    3636 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 10:37:21.510503    3636 start.go:340] cluster config:
	{Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:37:21.510603    3636 iso.go:125] acquiring lock: {Name:mkf51f842bcc8a77e9c7c50d642c4c76848e96af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:37:21.554326    3636 out.go:177] * Starting "ha-572000" primary control-plane node in "ha-572000" cluster
	I0717 10:37:21.575453    3636 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:37:21.575524    3636 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0717 10:37:21.575584    3636 cache.go:56] Caching tarball of preloaded images
	I0717 10:37:21.575806    3636 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:37:21.575825    3636 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:37:21.576014    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:37:21.577007    3636 start.go:360] acquireMachinesLock for ha-572000: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:37:21.577135    3636 start.go:364] duration metric: took 100.667µs to acquireMachinesLock for "ha-572000"
	I0717 10:37:21.577166    3636 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:37:21.577183    3636 fix.go:54] fixHost starting: 
	I0717 10:37:21.577591    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:21.577617    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:21.586612    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51936
	I0717 10:37:21.586997    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:21.587342    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:21.587357    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:21.587563    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:21.587707    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:21.587805    3636 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:37:21.587906    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:21.587984    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3521
	I0717 10:37:21.588936    3636 fix.go:112] recreateIfNeeded on ha-572000: state=Stopped err=<nil>
	I0717 10:37:21.588955    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:21.588954    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid 3521 missing from process table
	W0717 10:37:21.589054    3636 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:37:21.631187    3636 out.go:177] * Restarting existing hyperkit VM for "ha-572000" ...
	I0717 10:37:21.652411    3636 main.go:141] libmachine: (ha-572000) Calling .Start
	I0717 10:37:21.652671    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:21.652780    3636 main.go:141] libmachine: (ha-572000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid
	I0717 10:37:21.654451    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid 3521 missing from process table
	I0717 10:37:21.654462    3636 main.go:141] libmachine: (ha-572000) DBG | pid 3521 is in state "Stopped"
	I0717 10:37:21.654497    3636 main.go:141] libmachine: (ha-572000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid...
	I0717 10:37:21.654867    3636 main.go:141] libmachine: (ha-572000) DBG | Using UUID 5f2666de-0b32-4258-9840-7856c1bd4173
	I0717 10:37:21.763705    3636 main.go:141] libmachine: (ha-572000) DBG | Generated MAC d2:a6:10:ad:80:98
	I0717 10:37:21.763739    3636 main.go:141] libmachine: (ha-572000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:37:21.763844    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5f2666de-0b32-4258-9840-7856c1bd4173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8a80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:37:21.763875    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5f2666de-0b32-4258-9840-7856c1bd4173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8a80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:37:21.763912    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5f2666de-0b32-4258-9840-7856c1bd4173", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/ha-572000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:37:21.763957    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5f2666de-0b32-4258-9840-7856c1bd4173 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/ha-572000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:37:21.763980    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:37:21.765595    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: Pid is 3650
	I0717 10:37:21.766010    3636 main.go:141] libmachine: (ha-572000) DBG | Attempt 0
	I0717 10:37:21.766020    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:21.766092    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3650
	I0717 10:37:21.767880    3636 main.go:141] libmachine: (ha-572000) DBG | Searching for d2:a6:10:ad:80:98 in /var/db/dhcpd_leases ...
	I0717 10:37:21.767940    3636 main.go:141] libmachine: (ha-572000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:37:21.767961    3636 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x669951d0}
	I0717 10:37:21.767972    3636 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x669951be}
	I0717 10:37:21.767977    3636 main.go:141] libmachine: (ha-572000) DBG | Found match: d2:a6:10:ad:80:98
	I0717 10:37:21.767984    3636 main.go:141] libmachine: (ha-572000) DBG | IP: 192.169.0.5
	I0717 10:37:21.768041    3636 main.go:141] libmachine: (ha-572000) Calling .GetConfigRaw
	I0717 10:37:21.768653    3636 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:37:21.768835    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:37:21.769276    3636 machine.go:94] provisionDockerMachine start ...
	I0717 10:37:21.769288    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:21.769440    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:21.769559    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:21.769675    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:21.769782    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:21.769886    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:21.770036    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:21.770285    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:21.770298    3636 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:37:21.773346    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:37:21.825199    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:37:21.825892    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:37:21.825902    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:37:21.825909    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:37:21.825917    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:37:22.200252    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:37:22.200268    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:37:22.314927    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:37:22.314948    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:37:22.314982    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:37:22.314999    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:37:22.315852    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:37:22.315864    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:37:27.580528    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:27 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:37:27.580565    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:27 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:37:27.580573    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:27 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:37:27.604198    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:27 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:37:32.830003    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:37:32.830021    3636 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:37:32.830158    3636 buildroot.go:166] provisioning hostname "ha-572000"
	I0717 10:37:32.830170    3636 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:37:32.830268    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:32.830359    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:32.830451    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:32.830548    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:32.830646    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:32.830800    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:32.830958    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:32.830967    3636 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000 && echo "ha-572000" | sudo tee /etc/hostname
	I0717 10:37:32.892396    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000
	
	I0717 10:37:32.892414    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:32.892535    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:32.892617    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:32.892697    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:32.892768    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:32.892926    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:32.893069    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:32.893080    3636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:37:32.952066    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:37:32.952086    3636 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:37:32.952098    3636 buildroot.go:174] setting up certificates
	I0717 10:37:32.952109    3636 provision.go:84] configureAuth start
	I0717 10:37:32.952116    3636 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:37:32.952255    3636 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:37:32.952365    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:32.952464    3636 provision.go:143] copyHostCerts
	I0717 10:37:32.952503    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:37:32.952585    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:37:32.952594    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:37:32.952749    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:37:32.952965    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:37:32.953012    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:37:32.953018    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:37:32.953117    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:37:32.953281    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:37:32.953328    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:37:32.953333    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:37:32.953420    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:37:32.953574    3636 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000 san=[127.0.0.1 192.169.0.5 ha-572000 localhost minikube]
	I0717 10:37:33.013099    3636 provision.go:177] copyRemoteCerts
	I0717 10:37:33.013145    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:37:33.013161    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:33.013272    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:33.013371    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.013543    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:33.013682    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:33.045521    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:37:33.045593    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:37:33.064633    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:37:33.064699    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0717 10:37:33.084163    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:37:33.084229    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:37:33.103388    3636 provision.go:87] duration metric: took 151.262739ms to configureAuth
	I0717 10:37:33.103401    3636 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:37:33.103573    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:33.103587    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:33.103711    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:33.103809    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:33.103896    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.103977    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.104077    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:33.104181    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:33.104316    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:33.104324    3636 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:37:33.156434    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:37:33.156448    3636 buildroot.go:70] root file system type: tmpfs
	I0717 10:37:33.156525    3636 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:37:33.156537    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:33.156662    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:33.156743    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.156842    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.156931    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:33.157047    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:33.157186    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:33.157233    3636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:37:33.218680    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:37:33.218702    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:33.218866    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:33.218955    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.219056    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.219143    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:33.219283    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:33.219430    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:33.219443    3636 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:37:34.829521    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:37:34.829537    3636 machine.go:97] duration metric: took 13.059920588s to provisionDockerMachine
	I0717 10:37:34.829550    3636 start.go:293] postStartSetup for "ha-572000" (driver="hyperkit")
	I0717 10:37:34.829558    3636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:37:34.829569    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:34.829747    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:37:34.829763    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:34.829864    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:34.829977    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:34.830076    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:34.830154    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:34.863781    3636 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:37:34.867753    3636 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:37:34.867768    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:37:34.867875    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:37:34.868074    3636 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:37:34.868081    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:37:34.868294    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:37:34.881801    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:37:34.912172    3636 start.go:296] duration metric: took 82.609841ms for postStartSetup
	I0717 10:37:34.912193    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:34.912376    3636 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:37:34.912397    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:34.912490    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:34.912588    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:34.912689    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:34.912778    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:34.946140    3636 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:37:34.946199    3636 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:37:34.999470    3636 fix.go:56] duration metric: took 13.421948957s for fixHost
	I0717 10:37:34.999494    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:34.999648    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:34.999748    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:34.999850    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:34.999944    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:35.000069    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:35.000221    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:35.000229    3636 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 10:37:35.051085    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237854.922867132
	
	I0717 10:37:35.051099    3636 fix.go:216] guest clock: 1721237854.922867132
	I0717 10:37:35.051112    3636 fix.go:229] Guest: 2024-07-17 10:37:34.922867132 -0700 PDT Remote: 2024-07-17 10:37:34.999482 -0700 PDT m=+13.873438456 (delta=-76.614868ms)
	I0717 10:37:35.051130    3636 fix.go:200] guest clock delta is within tolerance: -76.614868ms
	I0717 10:37:35.051134    3636 start.go:83] releasing machines lock for "ha-572000", held for 13.473647062s
	I0717 10:37:35.051154    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:35.051301    3636 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:37:35.051418    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:35.051739    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:35.051853    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:35.051967    3636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:37:35.051989    3636 ssh_runner.go:195] Run: cat /version.json
	I0717 10:37:35.051998    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:35.052000    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:35.052101    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:35.052120    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:35.052207    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:35.052223    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:35.052289    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:35.052308    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:35.052381    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:35.052403    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:35.080899    3636 ssh_runner.go:195] Run: systemctl --version
	I0717 10:37:35.132487    3636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 10:37:35.137302    3636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:37:35.137349    3636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:37:35.150408    3636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:37:35.150420    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:37:35.150523    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:37:35.166824    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:37:35.175726    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:37:35.184531    3636 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:37:35.184576    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:37:35.193352    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:37:35.202047    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:37:35.210925    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:37:35.219775    3636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:37:35.228824    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:37:35.237746    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:37:35.246520    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:37:35.255409    3636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:37:35.263547    3636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:37:35.271637    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:35.370819    3636 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:37:35.385762    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:37:35.385839    3636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:37:35.397460    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:37:35.408605    3636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:37:35.423025    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:37:35.433954    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:37:35.444983    3636 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:37:35.462789    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:37:35.474320    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:37:35.491905    3636 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:37:35.494848    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:37:35.502963    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:37:35.516602    3636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:37:35.626759    3636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:37:35.732422    3636 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:37:35.732511    3636 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:37:35.746415    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:35.837452    3636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:37:38.134243    3636 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.296714656s)
	I0717 10:37:38.134309    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:37:38.145497    3636 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0717 10:37:38.159451    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:37:38.170560    3636 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:37:38.274400    3636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:37:38.385610    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:38.490247    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:37:38.502358    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:37:38.513179    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:38.610828    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 10:37:38.675050    3636 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:37:38.675129    3636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:37:38.679555    3636 start.go:563] Will wait 60s for crictl version
	I0717 10:37:38.679605    3636 ssh_runner.go:195] Run: which crictl
	I0717 10:37:38.682545    3636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:37:38.707789    3636 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 10:37:38.707873    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:37:38.724822    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:37:38.769236    3636 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:37:38.769287    3636 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:37:38.769657    3636 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:37:38.774296    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:37:38.784075    3636 kubeadm.go:883] updating cluster {Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 10:37:38.784175    3636 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:37:38.784231    3636 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 10:37:38.798317    3636 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0717 10:37:38.798329    3636 docker.go:615] Images already preloaded, skipping extraction
	I0717 10:37:38.798398    3636 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 10:37:38.810938    3636 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0717 10:37:38.810957    3636 cache_images.go:84] Images are preloaded, skipping loading
	I0717 10:37:38.810966    3636 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.30.2 docker true true} ...
	I0717 10:37:38.811048    3636 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-572000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 10:37:38.811115    3636 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 10:37:38.829256    3636 cni.go:84] Creating CNI manager for ""
	I0717 10:37:38.829269    3636 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 10:37:38.829280    3636 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 10:37:38.829295    3636 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-572000 NodeName:ha-572000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 10:37:38.829373    3636 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-572000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
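	The kubeadm config above is rendered from the cluster profile logged earlier (advertise address 192.169.0.5, pod subnet 10.244.0.0/16, service subnet 10.96.0.0/12). The sketch below shows how such a document can be rendered with text/template; the struct and template fields are illustrative and are not minikube's actual bootstrapper types.

package main

import (
	"os"
	"text/template"
)

// kubeadmValues holds the node-specific values substituted into the config.
// Hypothetical type for illustration only.
type kubeadmValues struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values taken from the log above; rendering to stdout for illustration.
	_ = t.Execute(os.Stdout, kubeadmValues{
		AdvertiseAddress: "192.169.0.5",
		BindPort:         8443,
		NodeName:         "ha-572000",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
	})
}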
	
	I0717 10:37:38.829387    3636 kube-vip.go:115] generating kube-vip config ...
	I0717 10:37:38.829437    3636 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 10:37:38.842048    3636 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 10:37:38.842112    3636 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0717 10:37:38.842157    3636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:37:38.849945    3636 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:37:38.849994    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 10:37:38.857243    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0717 10:37:38.870596    3636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:37:38.883936    3636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0717 10:37:38.897367    3636 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0717 10:37:38.910809    3636 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0717 10:37:38.913705    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:37:38.922873    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:39.030583    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:37:39.043433    3636 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000 for IP: 192.169.0.5
	I0717 10:37:39.043445    3636 certs.go:194] generating shared ca certs ...
	I0717 10:37:39.043456    3636 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:37:39.043642    3636 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:37:39.043720    3636 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:37:39.043730    3636 certs.go:256] generating profile certs ...
	I0717 10:37:39.043839    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key
	I0717 10:37:39.043918    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7
	I0717 10:37:39.043992    3636 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key
	I0717 10:37:39.043999    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:37:39.044021    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:37:39.044039    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:37:39.044057    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:37:39.044074    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 10:37:39.044104    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 10:37:39.044133    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 10:37:39.044152    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 10:37:39.044248    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:37:39.044296    3636 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:37:39.044310    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:37:39.044353    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:37:39.044397    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:37:39.044448    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:37:39.044541    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:37:39.044586    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:39.044607    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:37:39.044626    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:37:39.045107    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:37:39.076893    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:37:39.102499    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:37:39.129749    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:37:39.155627    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 10:37:39.180179    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 10:37:39.210181    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 10:37:39.264808    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 10:37:39.318806    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:37:39.365954    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:37:39.390620    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:37:39.410051    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 10:37:39.423408    3636 ssh_runner.go:195] Run: openssl version
	I0717 10:37:39.427605    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:37:39.436575    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:39.439804    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:39.439837    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:39.443971    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:37:39.452794    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:37:39.461667    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:37:39.464961    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:37:39.465002    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:37:39.469065    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
	I0717 10:37:39.477903    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:37:39.486816    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:37:39.490121    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:37:39.490162    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:37:39.494244    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 10:37:39.503378    3636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:37:39.506714    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 10:37:39.510953    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 10:37:39.515092    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 10:37:39.519272    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 10:37:39.523407    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 10:37:39.527554    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
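	The six openssl runs above are 24-hour expiry checks (`-checkend 86400`) against the control-plane certificates. The same check can be expressed in Go with crypto/x509, as in the sketch below; the certificate path is illustrative and this is not minikube's own implementation.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Illustrative path; minikube checks the certs under /var/lib/minikube/certs.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of `openssl x509 -checkend 86400`: fail if the certificate
	// expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}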
	I0717 10:37:39.531780    3636 kubeadm.go:392] StartCluster: {Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 C
lusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:37:39.531904    3636 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 10:37:39.544965    3636 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 10:37:39.553126    3636 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 10:37:39.553138    3636 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 10:37:39.553178    3636 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 10:37:39.561206    3636 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 10:37:39.561518    3636 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-572000" does not appear in /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:37:39.561607    3636 kubeconfig.go:62] /Users/jenkins/minikube-integration/19283-1099/kubeconfig needs updating (will repair): [kubeconfig missing "ha-572000" cluster setting kubeconfig missing "ha-572000" context setting]
	I0717 10:37:39.561822    3636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:37:39.562469    3636 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:37:39.562674    3636 kapi.go:59] client config for ha-572000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa0a8b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 10:37:39.562998    3636 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 10:37:39.563178    3636 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 10:37:39.570855    3636 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0717 10:37:39.570867    3636 kubeadm.go:597] duration metric: took 17.724744ms to restartPrimaryControlPlane
	I0717 10:37:39.570878    3636 kubeadm.go:394] duration metric: took 39.101036ms to StartCluster
	I0717 10:37:39.570889    3636 settings.go:142] acquiring lock: {Name:mkc45f011a907c66e2dbca7dadfff37ab48f7d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:37:39.570961    3636 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:37:39.571333    3636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:37:39.571564    3636 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:37:39.571579    3636 start.go:241] waiting for startup goroutines ...
	I0717 10:37:39.571583    3636 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 10:37:39.571709    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:39.622273    3636 out.go:177] * Enabled addons: 
	I0717 10:37:39.644517    3636 addons.go:510] duration metric: took 72.937257ms for enable addons: enabled=[]
	I0717 10:37:39.644554    3636 start.go:246] waiting for cluster config update ...
	I0717 10:37:39.644589    3636 start.go:255] writing updated cluster config ...
	I0717 10:37:39.667630    3636 out.go:177] 
	I0717 10:37:39.689827    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:39.689958    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:37:39.712261    3636 out.go:177] * Starting "ha-572000-m02" control-plane node in "ha-572000" cluster
	I0717 10:37:39.754151    3636 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:37:39.754211    3636 cache.go:56] Caching tarball of preloaded images
	I0717 10:37:39.754408    3636 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:37:39.754427    3636 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:37:39.754564    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:37:39.755532    3636 start.go:360] acquireMachinesLock for ha-572000-m02: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:37:39.755656    3636 start.go:364] duration metric: took 98.999µs to acquireMachinesLock for "ha-572000-m02"
	I0717 10:37:39.755680    3636 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:37:39.755687    3636 fix.go:54] fixHost starting: m02
	I0717 10:37:39.756121    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:39.756167    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:39.765321    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51958
	I0717 10:37:39.765669    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:39.765987    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:39.765996    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:39.766231    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:39.766367    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:39.766465    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetState
	I0717 10:37:39.766561    3636 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:39.766639    3636 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 3526
	I0717 10:37:39.767558    3636 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid 3526 missing from process table
	I0717 10:37:39.767584    3636 fix.go:112] recreateIfNeeded on ha-572000-m02: state=Stopped err=<nil>
	I0717 10:37:39.767592    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	W0717 10:37:39.767681    3636 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:37:39.811253    3636 out.go:177] * Restarting existing hyperkit VM for "ha-572000-m02" ...
	I0717 10:37:39.832179    3636 main.go:141] libmachine: (ha-572000-m02) Calling .Start
	I0717 10:37:39.832337    3636 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:39.832362    3636 main.go:141] libmachine: (ha-572000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid
	I0717 10:37:39.833334    3636 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid 3526 missing from process table
	I0717 10:37:39.833343    3636 main.go:141] libmachine: (ha-572000-m02) DBG | pid 3526 is in state "Stopped"
	I0717 10:37:39.833355    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid...
	I0717 10:37:39.833536    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Using UUID b5da5881-83da-4916-aec8-9a96c30c8c05
	I0717 10:37:39.859749    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Generated MAC 2:60:33:0:68:8b
	I0717 10:37:39.859777    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:37:39.859978    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b5da5881-83da-4916-aec8-9a96c30c8c05", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c0ae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:37:39.860020    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b5da5881-83da-4916-aec8-9a96c30c8c05", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c0ae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:37:39.860096    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b5da5881-83da-4916-aec8-9a96c30c8c05", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/ha-572000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machine
s/ha-572000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:37:39.860169    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b5da5881-83da-4916-aec8-9a96c30c8c05 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/ha-572000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd,earlyprintk=serial loglevel=3 console=t
tyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:37:39.860189    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:37:39.861788    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: Pid is 3657
	I0717 10:37:39.862251    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Attempt 0
	I0717 10:37:39.862268    3636 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:39.862355    3636 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 3657
	I0717 10:37:39.864079    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Searching for 2:60:33:0:68:8b in /var/db/dhcpd_leases ...
	I0717 10:37:39.864121    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:37:39.864142    3636 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x669952da}
	I0717 10:37:39.864158    3636 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x669951d0}
	I0717 10:37:39.864182    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Found match: 2:60:33:0:68:8b
	I0717 10:37:39.864197    3636 main.go:141] libmachine: (ha-572000-m02) DBG | IP: 192.169.0.6
	I0717 10:37:39.864229    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetConfigRaw
	I0717 10:37:39.865013    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:37:39.865242    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:37:39.865841    3636 machine.go:94] provisionDockerMachine start ...
	I0717 10:37:39.865853    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:39.866023    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:39.866160    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:39.866271    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:39.866402    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:39.866505    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:39.866622    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:39.866842    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:39.866854    3636 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:37:39.869683    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:37:39.878483    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:37:39.879603    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:37:39.879617    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:37:39.879624    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:37:39.879629    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:37:40.255889    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:37:40.255907    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:37:40.370491    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:37:40.370510    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:37:40.370520    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:37:40.370527    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:37:40.371371    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:37:40.371379    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:37:45.614184    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:45 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:37:45.614198    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:45 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:37:45.614209    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:45 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:37:45.638128    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:45 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:37:50.925250    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:37:50.925264    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:37:50.925388    3636 buildroot.go:166] provisioning hostname "ha-572000-m02"
	I0717 10:37:50.925396    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:37:50.925487    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:50.925569    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:50.925664    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:50.925753    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:50.925857    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:50.925992    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:50.926145    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:50.926154    3636 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000-m02 && echo "ha-572000-m02" | sudo tee /etc/hostname
	I0717 10:37:50.991059    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000-m02
	
	I0717 10:37:50.991079    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:50.991219    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:50.991316    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:50.991401    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:50.991492    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:50.991638    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:50.991791    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:50.991803    3636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:37:51.051090    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:37:51.051108    3636 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:37:51.051119    3636 buildroot.go:174] setting up certificates
	I0717 10:37:51.051126    3636 provision.go:84] configureAuth start
	I0717 10:37:51.051132    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:37:51.051276    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:37:51.051370    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:51.051458    3636 provision.go:143] copyHostCerts
	I0717 10:37:51.051492    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:37:51.051538    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:37:51.051544    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:37:51.051674    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:37:51.051883    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:37:51.051914    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:37:51.051919    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:37:51.052017    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:37:51.052173    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:37:51.052202    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:37:51.052207    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:37:51.052377    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:37:51.052529    3636 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000-m02 san=[127.0.0.1 192.169.0.6 ha-572000-m02 localhost minikube]
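	configureAuth generates a Docker server certificate whose SAN list covers the node's hostnames and IPs (127.0.0.1, 192.169.0.6, ha-572000-m02, localhost, minikube). The sketch below shows how a certificate with such a SAN list can be issued via crypto/x509; unlike minikube, which signs against its CA under .minikube/certs, this example self-signs for brevity and is illustrative only.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-572000-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list mirroring the one logged above.
		DNSNames:    []string{"ha-572000-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
	}
	// Self-signed for the sake of a self-contained example.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}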
	I0717 10:37:51.118183    3636 provision.go:177] copyRemoteCerts
	I0717 10:37:51.118227    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:37:51.118240    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:51.118378    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:51.118485    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.118583    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:51.118673    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:37:51.152061    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:37:51.152130    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:37:51.171745    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:37:51.171819    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 10:37:51.192673    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:37:51.192744    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:37:51.212788    3636 provision.go:87] duration metric: took 161.649391ms to configureAuth
	I0717 10:37:51.212802    3636 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:37:51.212965    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:51.212978    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:51.213112    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:51.213224    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:51.213316    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.213411    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.213499    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:51.213614    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:51.213748    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:51.213755    3636 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:37:51.269367    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:37:51.269384    3636 buildroot.go:70] root file system type: tmpfs
	I0717 10:37:51.269468    3636 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:37:51.269484    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:51.269663    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:51.269800    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.269888    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.269973    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:51.270120    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:51.270267    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:51.270313    3636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:37:51.334311    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:37:51.334330    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:51.334460    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:51.334550    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.334644    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.334739    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:51.334864    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:51.335013    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:51.335026    3636 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:37:52.973251    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:37:52.973265    3636 machine.go:97] duration metric: took 13.107082478s to provisionDockerMachine
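	[editor's note] The provisioning step that just completed writes /lib/systemd/system/docker.service.new over SSH and only swaps it into place (and restarts Docker) when `diff` reports the existing unit is missing or different. A minimal sketch of that update-if-changed idiom in Go, assuming root privileges and systemd on the target host; the paths and systemctl steps mirror the log, but the helper itself is illustrative, not minikube's implementation:

	// updateunit.go - illustrative sketch of the "replace unit file only if it
	// changed, then restart" pattern seen in the log above. Not minikube code.
	package main

	import (
		"bytes"
		"log"
		"os"
		"os/exec"
	)

	func main() {
		const unitPath = "/lib/systemd/system/docker.service"
		newUnit, err := os.ReadFile(unitPath + ".new") // rendered candidate unit
		if err != nil {
			log.Fatal(err)
		}

		current, err := os.ReadFile(unitPath) // may not exist on first provision
		if err == nil && bytes.Equal(current, newUnit) {
			log.Println("docker.service unchanged; skipping restart")
			return
		}

		if err := os.Rename(unitPath+".new", unitPath); err != nil {
			log.Fatal(err)
		}
		// Equivalent of: systemctl daemon-reload && systemctl enable docker && systemctl restart docker
		for _, args := range [][]string{
			{"daemon-reload"},
			{"enable", "docker"},
			{"restart", "docker"},
		} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				log.Fatalf("systemctl %v: %v\n%s", args, err, out)
			}
		}
	}

	Skipping the restart when nothing changed is what keeps repeated `minikube start` runs from bouncing the Docker daemon unnecessarily; in this run the old unit did not exist, so the replace-and-restart branch ran.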
	I0717 10:37:52.973273    3636 start.go:293] postStartSetup for "ha-572000-m02" (driver="hyperkit")
	I0717 10:37:52.973280    3636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:37:52.973291    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:52.973486    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:37:52.973497    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:52.973604    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:52.973699    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:52.973791    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:52.973882    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:37:53.016888    3636 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:37:53.020683    3636 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:37:53.020693    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:37:53.020793    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:37:53.020968    3636 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:37:53.020974    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:37:53.021167    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:37:53.029813    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:37:53.057224    3636 start.go:296] duration metric: took 83.939886ms for postStartSetup
	I0717 10:37:53.057245    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:53.057420    3636 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:37:53.057442    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:53.057549    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:53.057634    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:53.057729    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:53.057811    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:37:53.091296    3636 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:37:53.091355    3636 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:37:53.145297    3636 fix.go:56] duration metric: took 13.389268028s for fixHost
	I0717 10:37:53.145323    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:53.145457    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:53.145570    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:53.145662    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:53.145747    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:53.145888    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:53.146033    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:53.146041    3636 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 10:37:53.200266    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237873.035451058
	
	I0717 10:37:53.200279    3636 fix.go:216] guest clock: 1721237873.035451058
	I0717 10:37:53.200284    3636 fix.go:229] Guest: 2024-07-17 10:37:53.035451058 -0700 PDT Remote: 2024-07-17 10:37:53.145313 -0700 PDT m=+32.018809214 (delta=-109.861942ms)
	I0717 10:37:53.200294    3636 fix.go:200] guest clock delta is within tolerance: -109.861942ms
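	[editor's note] The guest-clock check above runs `date +%s.%N` inside the VM and compares the result with the host clock; here the ~-110 ms delta is accepted. A small sketch of that comparison, assuming the guest timestamp arrives as "seconds.nanoseconds" text; the 2-second tolerance is an illustrative value, not necessarily the one minikube uses:

	// clockdelta.go - sketch of comparing a guest "date +%s.%N" reading against
	// the host clock, as in the fix.go lines above. Tolerance is assumed.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			// normalize the fractional part to exactly 9 digits of nanoseconds
			frac := (parts[1] + "000000000")[:9]
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1721237873.035451058") // value from the log
		if err != nil {
			panic(err)
		}
		delta := guest.Sub(time.Now())
		const tolerance = 2 * time.Second // illustrative threshold
		if delta < -tolerance || delta > tolerance {
			fmt.Printf("guest clock delta %v exceeds tolerance; would resync the VM clock\n", delta)
			return
		}
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}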
	I0717 10:37:53.200298    3636 start.go:83] releasing machines lock for "ha-572000-m02", held for 13.44429115s
	I0717 10:37:53.200315    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:53.200436    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:37:53.222208    3636 out.go:177] * Found network options:
	I0717 10:37:53.243791    3636 out.go:177]   - NO_PROXY=192.169.0.5
	W0717 10:37:53.264601    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:37:53.264624    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:53.265081    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:53.265198    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:53.265269    3636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:37:53.265297    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	W0717 10:37:53.265332    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:37:53.265384    3636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 10:37:53.265387    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:53.265394    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:53.265518    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:53.265536    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:53.265639    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:53.265670    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:53.265728    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:37:53.265789    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:53.265871    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	W0717 10:37:53.294993    3636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:37:53.295059    3636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:37:53.339897    3636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:37:53.339919    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:37:53.340039    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:37:53.356231    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:37:53.365203    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:37:53.374127    3636 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:37:53.374184    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:37:53.382910    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:37:53.391778    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:37:53.400635    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:37:53.409795    3636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:37:53.418780    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:37:53.427594    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:37:53.436364    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:37:53.445437    3636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:37:53.453621    3636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:37:53.461634    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:53.558529    3636 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:37:53.577286    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:37:53.577360    3636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:37:53.591736    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:37:53.603521    3636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:37:53.618503    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:37:53.629064    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:37:53.639359    3636 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:37:53.658160    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:37:53.668814    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:37:53.683643    3636 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:37:53.686618    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:37:53.693926    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:37:53.707525    3636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:37:53.805691    3636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:37:53.920383    3636 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:37:53.920404    3636 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:37:53.934506    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:54.030259    3636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:37:56.344867    3636 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.314525686s)
	I0717 10:37:56.344926    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:37:56.355390    3636 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0717 10:37:56.369820    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:37:56.380473    3636 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:37:56.479810    3636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:37:56.576860    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:56.671071    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:37:56.685037    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:37:56.696333    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:56.796692    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 10:37:56.861896    3636 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:37:56.861969    3636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:37:56.866672    3636 start.go:563] Will wait 60s for crictl version
	I0717 10:37:56.866724    3636 ssh_runner.go:195] Run: which crictl
	I0717 10:37:56.869877    3636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:37:56.896141    3636 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 10:37:56.896217    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:37:56.915592    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:37:56.953839    3636 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:37:56.975427    3636 out.go:177]   - env NO_PROXY=192.169.0.5
	I0717 10:37:56.996201    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:37:56.996608    3636 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:37:57.001171    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:37:57.011676    3636 mustload.go:65] Loading cluster: ha-572000
	I0717 10:37:57.011852    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:57.012113    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:57.012134    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:57.020969    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51980
	I0717 10:37:57.021367    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:57.021710    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:57.021724    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:57.021923    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:57.022051    3636 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:37:57.022138    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:57.022223    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3650
	I0717 10:37:57.023174    3636 host.go:66] Checking if "ha-572000" exists ...
	I0717 10:37:57.023426    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:57.023448    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:57.032019    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51982
	I0717 10:37:57.032378    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:57.032733    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:57.032749    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:57.032974    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:57.033082    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:57.033182    3636 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000 for IP: 192.169.0.6
	I0717 10:37:57.033189    3636 certs.go:194] generating shared ca certs ...
	I0717 10:37:57.033198    3636 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:37:57.033338    3636 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:37:57.033394    3636 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:37:57.033402    3636 certs.go:256] generating profile certs ...
	I0717 10:37:57.033489    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key
	I0717 10:37:57.033573    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.060f3240
	I0717 10:37:57.033624    3636 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key
	I0717 10:37:57.033631    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:37:57.033652    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:37:57.033672    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:37:57.033691    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:37:57.033708    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 10:37:57.033726    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 10:37:57.033744    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 10:37:57.033762    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 10:37:57.033840    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:37:57.033893    3636 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:37:57.033902    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:37:57.033938    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:37:57.033978    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:37:57.034008    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:37:57.034074    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:37:57.034108    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:37:57.034128    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:37:57.034146    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:57.034178    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:57.034270    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:57.034368    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:57.034458    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:57.034541    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:57.060171    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0717 10:37:57.063698    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 10:37:57.072274    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0717 10:37:57.075754    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0717 10:37:57.084043    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 10:37:57.087057    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 10:37:57.095232    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0717 10:37:57.098576    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0717 10:37:57.107451    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0717 10:37:57.110444    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 10:37:57.118613    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0717 10:37:57.121532    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 10:37:57.130217    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:37:57.149961    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:37:57.168914    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:37:57.188002    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:37:57.207206    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 10:37:57.226812    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 10:37:57.246070    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 10:37:57.265450    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 10:37:57.284420    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:37:57.303511    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:37:57.322687    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:37:57.341613    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 10:37:57.355190    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0717 10:37:57.368847    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 10:37:57.382513    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0717 10:37:57.395989    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 10:37:57.409357    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 10:37:57.423052    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0717 10:37:57.436932    3636 ssh_runner.go:195] Run: openssl version
	I0717 10:37:57.441057    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:37:57.450112    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:37:57.453386    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:37:57.453428    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:37:57.457514    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
	I0717 10:37:57.466394    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:37:57.475362    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:37:57.478777    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:37:57.478819    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:37:57.482919    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 10:37:57.491931    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:37:57.500785    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:57.504034    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:57.504067    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:57.508244    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:37:57.517376    3636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:37:57.520713    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 10:37:57.524959    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 10:37:57.529259    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 10:37:57.533468    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 10:37:57.537834    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 10:37:57.542026    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
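	[editor's note] The series of `openssl x509 ... -checkend 86400` calls above asks whether each control-plane certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The same check expressed in Go, assuming a PEM-encoded certificate on disk; the path is a placeholder taken from the log:

	// checkend.go - Go equivalent of `openssl x509 -noout -checkend 86400`:
	// report whether a PEM certificate expires within the next 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		const certPath = "/var/lib/minikube/certs/apiserver-kubelet-client.crt" // placeholder path
		data, err := os.ReadFile(certPath)
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		deadline := time.Now().Add(24 * time.Hour) // 86400 seconds, as in the log
		if cert.NotAfter.Before(deadline) {
			fmt.Printf("certificate expires at %s (within 24h); would regenerate\n", cert.NotAfter)
		} else {
			fmt.Printf("certificate valid until %s\n", cert.NotAfter)
		}
	}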
	I0717 10:37:57.546248    3636 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.30.2 docker true true} ...
	I0717 10:37:57.546318    3636 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-572000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 10:37:57.546337    3636 kube-vip.go:115] generating kube-vip config ...
	I0717 10:37:57.546371    3636 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 10:37:57.559423    3636 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 10:37:57.559466    3636 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
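	[editor's note] The kube-vip manifest above is rendered with lb_enable: "true" only after the `modprobe --all ip_vs ...` probe a few lines earlier succeeds, which is what the "auto-enabling control-plane load-balancing" message records; the rendered manifest is then copied into /etc/kubernetes/manifests (the 1440-byte scp logged just below) so the kubelet runs it as a static pod serving the 192.169.0.254 VIP. A hedged sketch of that gate using os/exec; the module list mirrors the logged command, and the decision itself is simplified:

	// kubevip_lb.go - sketch of gating kube-vip's lb_enable on whether the IPVS
	// kernel modules load, mirroring the modprobe call in the log. Illustrative only.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same module list as the logged `modprobe --all ...` command.
		mods := []string{"ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh", "nf_conntrack"}
		args := append([]string{"--all"}, mods...)
		lbEnable := exec.Command("modprobe", args...).Run() == nil
		if lbEnable {
			fmt.Println("IPVS modules available: render kube-vip config with lb_enable=true")
		} else {
			fmt.Println("IPVS modules unavailable: render kube-vip config without load balancing")
		}
	}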
	I0717 10:37:57.559520    3636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:37:57.567774    3636 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:37:57.567817    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 10:37:57.575763    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0717 10:37:57.589137    3636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:37:57.602430    3636 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0717 10:37:57.616134    3636 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0717 10:37:57.619036    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:37:57.629004    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:57.726717    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:37:57.741206    3636 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:37:57.741389    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:57.762661    3636 out.go:177] * Verifying Kubernetes components...
	I0717 10:37:57.804314    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:57.930654    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:37:57.959022    3636 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:37:57.959251    3636 kapi.go:59] client config for ha-572000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa0a8b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 10:37:57.959292    3636 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0717 10:37:57.959472    3636 node_ready.go:35] waiting up to 6m0s for node "ha-572000-m02" to be "Ready" ...
	I0717 10:37:57.959551    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:37:57.959557    3636 round_trippers.go:469] Request Headers:
	I0717 10:37:57.959564    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:37:57.959567    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.587526    3636 round_trippers.go:574] Response Status: 200 OK in 8627 milliseconds
	I0717 10:38:06.588080    3636 node_ready.go:49] node "ha-572000-m02" has status "Ready":"True"
	I0717 10:38:06.588093    3636 node_ready.go:38] duration metric: took 8.628386286s for node "ha-572000-m02" to be "Ready" ...
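	[editor's note] The node_ready wait above polls GET /api/v1/nodes/ha-572000-m02 until the node reports Ready (about 8.6 s here, most of it the first slow response while the restarted API server settles). A compact sketch of the same check with client-go, assuming a kubeconfig path; this is not minikube's node_ready.go, just the standard clientset pattern:

	// nodeready.go - sketch: poll a node's Ready condition via client-go,
	// similar in spirit to the node_ready wait in the log. Names are placeholders.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // same budget as the log
		defer cancel()

		for {
			node, err := client.CoreV1().Nodes().Get(ctx, "ha-572000-m02", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			select {
			case <-ctx.Done():
				log.Fatal("timed out waiting for node to become Ready")
			case <-time.After(2 * time.Second):
			}
		}
	}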
	I0717 10:38:06.588101    3636 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:38:06.588149    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:06.588155    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.588161    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.588168    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.624239    3636 round_trippers.go:574] Response Status: 200 OK in 36 milliseconds
	I0717 10:38:06.633134    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.633193    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:06.633198    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.633204    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.633210    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.642331    3636 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 10:38:06.642741    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:06.642749    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.642756    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.642759    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.645958    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:06.646753    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:06.646763    3636 pod_ready.go:81] duration metric: took 13.611341ms for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.646771    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.646808    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9dzd5
	I0717 10:38:06.646813    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.646818    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.646822    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.650165    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:06.650520    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:06.650527    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.650533    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.650538    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.652506    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:06.652830    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:06.652839    3636 pod_ready.go:81] duration metric: took 6.063342ms for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.652846    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.652883    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000
	I0717 10:38:06.652888    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.652894    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.652897    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.688343    3636 round_trippers.go:574] Response Status: 200 OK in 35 milliseconds
	I0717 10:38:06.688830    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:06.688842    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.688852    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.688855    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.691433    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:06.691756    3636 pod_ready.go:92] pod "etcd-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:06.691766    3636 pod_ready.go:81] duration metric: took 38.913354ms for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.691776    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.691822    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m02
	I0717 10:38:06.691828    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.691835    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.691841    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.722915    3636 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0717 10:38:06.723291    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:06.723298    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.723304    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.723309    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.762595    3636 round_trippers.go:574] Response Status: 200 OK in 39 milliseconds
	I0717 10:38:06.763038    3636 pod_ready.go:92] pod "etcd-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:06.763050    3636 pod_ready.go:81] duration metric: took 71.265447ms for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.763057    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.763098    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m03
	I0717 10:38:06.763103    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.763109    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.763112    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.766379    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:06.788728    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:06.788744    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.788750    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.788754    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.790975    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:06.791292    3636 pod_ready.go:92] pod "etcd-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:06.791302    3636 pod_ready.go:81] duration metric: took 28.239348ms for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.791319    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.988792    3636 request.go:629] Waited for 197.413405ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:38:06.988885    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:38:06.988891    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.988897    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.988903    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.991048    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:07.189095    3636 request.go:629] Waited for 197.524443ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:07.189132    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:07.189138    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:07.189146    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:07.189196    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:07.191472    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:07.191816    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:07.191825    3636 pod_ready.go:81] duration metric: took 400.490534ms for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
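	[editor's note] The repeated "Waited for ~200ms due to client-side throttling, not priority and fairness" messages in these pod_ready checks come from client-go's client-side rate limiter; the kapi.go config dump above leaves QPS and Burst at 0, so the client-go defaults apply (5 requests/s with a burst of 10, to the best of my knowledge). These waits are harmless here, but if they ever mattered, raising the limits on the rest.Config is the standard knob, sketched below with purely illustrative values:

	// qps.go - sketch: relax client-side throttling by raising QPS/Burst on a
	// rest.Config before building the clientset. Values are illustrative.
	package main

	import (
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			log.Fatal(err)
		}
		cfg.QPS = 50    // default is 5 when left at 0
		cfg.Burst = 100 // default is 10 when left at 0
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		_ = client // use the clientset as usual; requests above the limit just queue briefly
	}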
	I0717 10:38:07.191832    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:07.388673    3636 request.go:629] Waited for 196.768491ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:38:07.388712    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:38:07.388717    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:07.388723    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:07.388726    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:07.390742    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:07.589477    3636 request.go:629] Waited for 198.180735ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:07.589514    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:07.589519    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:07.589526    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:07.589532    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:07.593904    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:38:07.594274    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:07.594283    3636 pod_ready.go:81] duration metric: took 402.436695ms for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:07.594290    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:07.789046    3636 request.go:629] Waited for 194.715768ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:38:07.789116    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:38:07.789122    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:07.789128    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:07.789134    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:07.791498    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:07.988262    3636 request.go:629] Waited for 196.319765ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:07.988331    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:07.988337    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:07.988344    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:07.988349    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:07.990665    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:07.990933    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:07.990943    3636 pod_ready.go:81] duration metric: took 396.637435ms for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:07.990949    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:08.189888    3636 request.go:629] Waited for 198.896315ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:38:08.189961    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:38:08.189968    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:08.189977    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:08.189982    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:08.192640    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:08.388942    3636 request.go:629] Waited for 195.85351ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:08.388998    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:08.389006    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:08.389019    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:08.389035    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:08.392574    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:08.392939    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:08.392951    3636 pod_ready.go:81] duration metric: took 401.985681ms for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:08.392963    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:08.589323    3636 request.go:629] Waited for 196.303012ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:38:08.589449    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:38:08.589461    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:08.589473    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:08.589481    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:08.592867    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:08.788589    3636 request.go:629] Waited for 195.011915ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:08.788634    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:08.788643    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:08.788654    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:08.788663    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:08.791468    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:08.791995    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:08.792019    3636 pod_ready.go:81] duration metric: took 399.039947ms for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:08.792032    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:08.990174    3636 request.go:629] Waited for 198.086662ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:38:08.990290    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:38:08.990300    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:08.990310    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:08.990317    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:08.993459    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:09.189555    3636 request.go:629] Waited for 195.556708ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:09.189686    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:09.189699    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:09.189710    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:09.189717    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:09.193157    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:09.193504    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:09.193518    3636 pod_ready.go:81] duration metric: took 401.469313ms for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:09.193543    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:09.389705    3636 request.go:629] Waited for 196.104363ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:38:09.389843    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:38:09.389855    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:09.389866    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:09.389872    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:09.393695    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:09.588443    3636 request.go:629] Waited for 194.213728ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:38:09.588571    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:38:09.588582    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:09.588591    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:09.588614    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:09.591794    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:09.592120    3636 pod_ready.go:92] pod "kube-proxy-5wcph" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:09.592130    3636 pod_ready.go:81] duration metric: took 398.566071ms for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:09.592136    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:09.789810    3636 request.go:629] Waited for 197.599858ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:38:09.789932    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:38:09.789953    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:09.789967    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:09.789977    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:09.793548    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:09.990128    3636 request.go:629] Waited for 195.990226ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:09.990259    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:09.990271    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:09.990282    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:09.990289    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:09.994401    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:38:09.995074    3636 pod_ready.go:92] pod "kube-proxy-h7k9z" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:09.995084    3636 pod_ready.go:81] duration metric: took 402.932164ms for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:09.995091    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:10.188412    3636 request.go:629] Waited for 193.228723ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:38:10.188460    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:38:10.188468    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:10.188479    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:10.188487    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:10.192053    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:10.389379    3636 request.go:629] Waited for 196.635202ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:10.389554    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:10.389574    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:10.389589    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:10.389599    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:10.393541    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:10.393889    3636 pod_ready.go:92] pod "kube-proxy-hst7h" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:10.393900    3636 pod_ready.go:81] duration metric: took 398.793558ms for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:10.393912    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:10.589752    3636 request.go:629] Waited for 195.757616ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:38:10.589810    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:38:10.589821    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:10.589833    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:10.589842    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:10.593161    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:10.789574    3636 request.go:629] Waited for 195.972483ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:10.789649    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:10.789655    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:10.789661    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:10.789665    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:10.792056    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:10.792456    3636 pod_ready.go:92] pod "kube-proxy-v6jxh" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:10.792465    3636 pod_ready.go:81] duration metric: took 398.537807ms for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:10.792472    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:10.990155    3636 request.go:629] Waited for 197.636631ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:38:10.990304    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:38:10.990316    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:10.990327    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:10.990333    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:10.993508    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:11.188937    3636 request.go:629] Waited for 194.57393ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:11.188967    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:11.188973    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:11.188979    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:11.188983    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:11.190738    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:11.191134    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:11.191144    3636 pod_ready.go:81] duration metric: took 398.656979ms for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:11.191150    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:11.388866    3636 request.go:629] Waited for 197.675969ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:38:11.388927    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:38:11.388931    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:11.388937    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:11.388941    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:11.390887    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:11.589661    3636 request.go:629] Waited for 198.35169ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:11.589745    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:11.589751    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:11.589759    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:11.589764    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:11.591880    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:11.592231    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:11.592240    3636 pod_ready.go:81] duration metric: took 401.075331ms for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:11.592247    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:11.790368    3636 request.go:629] Waited for 198.069219ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:38:11.790468    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:38:11.790479    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:11.790491    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:11.790498    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:11.793691    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:11.988391    3636 request.go:629] Waited for 194.130009ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:11.988514    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:11.988524    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:11.988535    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:11.988543    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:11.991587    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:11.991946    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:11.991960    3636 pod_ready.go:81] duration metric: took 399.692083ms for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:11.991969    3636 pod_ready.go:38] duration metric: took 5.403719656s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:38:11.991988    3636 api_server.go:52] waiting for apiserver process to appear ...
	I0717 10:38:11.992040    3636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 10:38:12.003855    3636 api_server.go:72] duration metric: took 14.26226374s to wait for apiserver process to appear ...
	I0717 10:38:12.003867    3636 api_server.go:88] waiting for apiserver healthz status ...
	I0717 10:38:12.003882    3636 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0717 10:38:12.008423    3636 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0717 10:38:12.008465    3636 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0717 10:38:12.008471    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:12.008478    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:12.008481    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:12.009101    3636 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:38:12.009162    3636 api_server.go:141] control plane version: v1.30.2
	I0717 10:38:12.009171    3636 api_server.go:131] duration metric: took 5.299116ms to wait for apiserver health ...
	I0717 10:38:12.009178    3636 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 10:38:12.189013    3636 request.go:629] Waited for 179.768156ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:12.189094    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:12.189102    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:12.189111    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:12.189116    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:12.194083    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:38:12.199463    3636 system_pods.go:59] 26 kube-system pods found
	I0717 10:38:12.199478    3636 system_pods.go:61] "coredns-7db6d8ff4d-2phrp" [e76a91cf-8a14-454c-baee-7c0128645e0e] Running
	I0717 10:38:12.199495    3636 system_pods.go:61] "coredns-7db6d8ff4d-9dzd5" [f9a7cff1-56a3-4600-ae2f-a951dda10753] Running
	I0717 10:38:12.199501    3636 system_pods.go:61] "etcd-ha-572000" [2d7b717e-d404-4c63-afe9-799de6711964] Running
	I0717 10:38:12.199505    3636 system_pods.go:61] "etcd-ha-572000-m02" [8abc7662-c159-4953-9aba-11a75b4e7d65] Running
	I0717 10:38:12.199509    3636 system_pods.go:61] "etcd-ha-572000-m03" [78629908-d362-4a27-933f-2b929867c22f] Running
	I0717 10:38:12.199518    3636 system_pods.go:61] "kindnet-5xsrp" [0b84aa4d-7452-47f6-8052-f063e2e8ef7b] Running
	I0717 10:38:12.199521    3636 system_pods.go:61] "kindnet-72zfp" [0cc735c4-08b3-499a-b9e2-8c38377f371b] Running
	I0717 10:38:12.199524    3636 system_pods.go:61] "kindnet-g2m92" [be3d84cf-f9e8-426c-b02a-49eb7eed9d6a] Running
	I0717 10:38:12.199526    3636 system_pods.go:61] "kindnet-t85bv" [8649cc70-8ce8-4caf-938e-bd253fa5b7ae] Running
	I0717 10:38:12.199530    3636 system_pods.go:61] "kube-apiserver-ha-572000" [6409591d-7a50-414b-be63-44d5ddb0b0e0] Running
	I0717 10:38:12.199532    3636 system_pods.go:61] "kube-apiserver-ha-572000-m02" [d658459c-67f9-4798-87f2-c651d830af35] Running
	I0717 10:38:12.199535    3636 system_pods.go:61] "kube-apiserver-ha-572000-m03" [ac1c02f1-f023-4ad2-99bc-2657fb9d50d7] Running
	I0717 10:38:12.199538    3636 system_pods.go:61] "kube-controller-manager-ha-572000" [075c4d23-04bd-404a-822d-ae7d326a68ac] Running
	I0717 10:38:12.199541    3636 system_pods.go:61] "kube-controller-manager-ha-572000-m02" [3014bdb3-bb86-48d5-a045-2a8dab026bd8] Running
	I0717 10:38:12.199544    3636 system_pods.go:61] "kube-controller-manager-ha-572000-m03" [3b76e220-3a5b-4dac-8f69-6b380799d2ef] Running
	I0717 10:38:12.199546    3636 system_pods.go:61] "kube-proxy-5wcph" [731f5b57-131e-4e97-b47a-036b8d4edbcd] Running
	I0717 10:38:12.199553    3636 system_pods.go:61] "kube-proxy-h7k9z" [cf687084-b40d-43b6-9476-8c60c5f37d1d] Running
	I0717 10:38:12.199557    3636 system_pods.go:61] "kube-proxy-hst7h" [2b7c8d2c-3d71-4357-9429-1d8438c446f5] Running
	I0717 10:38:12.199559    3636 system_pods.go:61] "kube-proxy-v6jxh" [3f952fc8-747f-49da-b400-5212dba538d8] Running
	I0717 10:38:12.199565    3636 system_pods.go:61] "kube-scheduler-ha-572000" [da36a422-ba49-4cd6-8f47-9be7c43657be] Running
	I0717 10:38:12.199568    3636 system_pods.go:61] "kube-scheduler-ha-572000-m02" [13e289a2-8d3c-478f-9220-1d114dd5bf62] Running
	I0717 10:38:12.199571    3636 system_pods.go:61] "kube-scheduler-ha-572000-m03" [428716db-8096-4b57-9f90-e706d5683852] Running
	I0717 10:38:12.199573    3636 system_pods.go:61] "kube-vip-ha-572000" [289bb6df-c101-49d3-9e4a-6c0ecdd26551] Running
	I0717 10:38:12.199576    3636 system_pods.go:61] "kube-vip-ha-572000-m02" [fa57b8d2-bc41-42af-9d9a-1b68f882e6fe] Running
	I0717 10:38:12.199579    3636 system_pods.go:61] "kube-vip-ha-572000-m03" [a35c5007-dfe3-4b58-b16c-d568b8f9c1c2] Running
	I0717 10:38:12.199581    3636 system_pods.go:61] "storage-provisioner" [1801216d-6529-4049-b874-1577132fc03f] Running
	I0717 10:38:12.199585    3636 system_pods.go:74] duration metric: took 190.398086ms to wait for pod list to return data ...
	I0717 10:38:12.199592    3636 default_sa.go:34] waiting for default service account to be created ...
	I0717 10:38:12.388401    3636 request.go:629] Waited for 188.727547ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0717 10:38:12.388434    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0717 10:38:12.388439    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:12.388445    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:12.388449    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:12.390736    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:12.390877    3636 default_sa.go:45] found service account: "default"
	I0717 10:38:12.390886    3636 default_sa.go:55] duration metric: took 191.284842ms for default service account to be created ...
	I0717 10:38:12.390892    3636 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 10:38:12.588992    3636 request.go:629] Waited for 198.054942ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:12.589092    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:12.589101    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:12.589115    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:12.589123    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:12.595003    3636 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 10:38:12.599941    3636 system_pods.go:86] 26 kube-system pods found
	I0717 10:38:12.599953    3636 system_pods.go:89] "coredns-7db6d8ff4d-2phrp" [e76a91cf-8a14-454c-baee-7c0128645e0e] Running
	I0717 10:38:12.599962    3636 system_pods.go:89] "coredns-7db6d8ff4d-9dzd5" [f9a7cff1-56a3-4600-ae2f-a951dda10753] Running
	I0717 10:38:12.599966    3636 system_pods.go:89] "etcd-ha-572000" [2d7b717e-d404-4c63-afe9-799de6711964] Running
	I0717 10:38:12.599970    3636 system_pods.go:89] "etcd-ha-572000-m02" [8abc7662-c159-4953-9aba-11a75b4e7d65] Running
	I0717 10:38:12.599986    3636 system_pods.go:89] "etcd-ha-572000-m03" [78629908-d362-4a27-933f-2b929867c22f] Running
	I0717 10:38:12.599992    3636 system_pods.go:89] "kindnet-5xsrp" [0b84aa4d-7452-47f6-8052-f063e2e8ef7b] Running
	I0717 10:38:12.599996    3636 system_pods.go:89] "kindnet-72zfp" [0cc735c4-08b3-499a-b9e2-8c38377f371b] Running
	I0717 10:38:12.599999    3636 system_pods.go:89] "kindnet-g2m92" [be3d84cf-f9e8-426c-b02a-49eb7eed9d6a] Running
	I0717 10:38:12.600003    3636 system_pods.go:89] "kindnet-t85bv" [8649cc70-8ce8-4caf-938e-bd253fa5b7ae] Running
	I0717 10:38:12.600007    3636 system_pods.go:89] "kube-apiserver-ha-572000" [6409591d-7a50-414b-be63-44d5ddb0b0e0] Running
	I0717 10:38:12.600010    3636 system_pods.go:89] "kube-apiserver-ha-572000-m02" [d658459c-67f9-4798-87f2-c651d830af35] Running
	I0717 10:38:12.600014    3636 system_pods.go:89] "kube-apiserver-ha-572000-m03" [ac1c02f1-f023-4ad2-99bc-2657fb9d50d7] Running
	I0717 10:38:12.600018    3636 system_pods.go:89] "kube-controller-manager-ha-572000" [075c4d23-04bd-404a-822d-ae7d326a68ac] Running
	I0717 10:38:12.600021    3636 system_pods.go:89] "kube-controller-manager-ha-572000-m02" [3014bdb3-bb86-48d5-a045-2a8dab026bd8] Running
	I0717 10:38:12.600024    3636 system_pods.go:89] "kube-controller-manager-ha-572000-m03" [3b76e220-3a5b-4dac-8f69-6b380799d2ef] Running
	I0717 10:38:12.600028    3636 system_pods.go:89] "kube-proxy-5wcph" [731f5b57-131e-4e97-b47a-036b8d4edbcd] Running
	I0717 10:38:12.600031    3636 system_pods.go:89] "kube-proxy-h7k9z" [cf687084-b40d-43b6-9476-8c60c5f37d1d] Running
	I0717 10:38:12.600035    3636 system_pods.go:89] "kube-proxy-hst7h" [2b7c8d2c-3d71-4357-9429-1d8438c446f5] Running
	I0717 10:38:12.600038    3636 system_pods.go:89] "kube-proxy-v6jxh" [3f952fc8-747f-49da-b400-5212dba538d8] Running
	I0717 10:38:12.600041    3636 system_pods.go:89] "kube-scheduler-ha-572000" [da36a422-ba49-4cd6-8f47-9be7c43657be] Running
	I0717 10:38:12.600044    3636 system_pods.go:89] "kube-scheduler-ha-572000-m02" [13e289a2-8d3c-478f-9220-1d114dd5bf62] Running
	I0717 10:38:12.600048    3636 system_pods.go:89] "kube-scheduler-ha-572000-m03" [428716db-8096-4b57-9f90-e706d5683852] Running
	I0717 10:38:12.600051    3636 system_pods.go:89] "kube-vip-ha-572000" [289bb6df-c101-49d3-9e4a-6c0ecdd26551] Running
	I0717 10:38:12.600054    3636 system_pods.go:89] "kube-vip-ha-572000-m02" [fa57b8d2-bc41-42af-9d9a-1b68f882e6fe] Running
	I0717 10:38:12.600058    3636 system_pods.go:89] "kube-vip-ha-572000-m03" [a35c5007-dfe3-4b58-b16c-d568b8f9c1c2] Running
	I0717 10:38:12.600061    3636 system_pods.go:89] "storage-provisioner" [1801216d-6529-4049-b874-1577132fc03f] Running
	I0717 10:38:12.600065    3636 system_pods.go:126] duration metric: took 209.164597ms to wait for k8s-apps to be running ...
	I0717 10:38:12.600076    3636 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 10:38:12.600137    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 10:38:12.610524    3636 system_svc.go:56] duration metric: took 10.448568ms WaitForService to wait for kubelet
	I0717 10:38:12.610538    3636 kubeadm.go:582] duration metric: took 14.868933199s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:38:12.610564    3636 node_conditions.go:102] verifying NodePressure condition ...
	I0717 10:38:12.789306    3636 request.go:629] Waited for 178.678322ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0717 10:38:12.789427    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0717 10:38:12.789438    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:12.789448    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:12.789457    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:12.793007    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:12.794084    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:38:12.794097    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:38:12.794107    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:38:12.794110    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:38:12.794114    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:38:12.794122    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:38:12.794126    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:38:12.794129    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:38:12.794133    3636 node_conditions.go:105] duration metric: took 183.560156ms to run NodePressure ...
	I0717 10:38:12.794140    3636 start.go:241] waiting for startup goroutines ...
	I0717 10:38:12.794158    3636 start.go:255] writing updated cluster config ...
	I0717 10:38:12.815984    3636 out.go:177] 
	I0717 10:38:12.836616    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:38:12.836683    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:38:12.857448    3636 out.go:177] * Starting "ha-572000-m03" control-plane node in "ha-572000" cluster
	I0717 10:38:12.899463    3636 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:38:12.899506    3636 cache.go:56] Caching tarball of preloaded images
	I0717 10:38:12.899666    3636 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:38:12.899684    3636 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:38:12.899813    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:38:12.900669    3636 start.go:360] acquireMachinesLock for ha-572000-m03: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:38:12.900765    3636 start.go:364] duration metric: took 73.243µs to acquireMachinesLock for "ha-572000-m03"
	I0717 10:38:12.900790    3636 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:38:12.900816    3636 fix.go:54] fixHost starting: m03
	I0717 10:38:12.901158    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:38:12.901182    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:38:12.910100    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51987
	I0717 10:38:12.910428    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:38:12.910808    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:38:12.910824    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:38:12.911027    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:38:12.911151    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:12.911236    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetState
	I0717 10:38:12.911315    3636 main.go:141] libmachine: (ha-572000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:38:12.911405    3636 main.go:141] libmachine: (ha-572000-m03) DBG | hyperkit pid from json: 2972
	I0717 10:38:12.912336    3636 main.go:141] libmachine: (ha-572000-m03) DBG | hyperkit pid 2972 missing from process table
	I0717 10:38:12.912361    3636 fix.go:112] recreateIfNeeded on ha-572000-m03: state=Stopped err=<nil>
	I0717 10:38:12.912369    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	W0717 10:38:12.912452    3636 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:38:12.933536    3636 out.go:177] * Restarting existing hyperkit VM for "ha-572000-m03" ...
	I0717 10:38:12.975448    3636 main.go:141] libmachine: (ha-572000-m03) Calling .Start
	I0717 10:38:12.975666    3636 main.go:141] libmachine: (ha-572000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:38:12.975716    3636 main.go:141] libmachine: (ha-572000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/hyperkit.pid
	I0717 10:38:12.977484    3636 main.go:141] libmachine: (ha-572000-m03) DBG | hyperkit pid 2972 missing from process table
	I0717 10:38:12.977496    3636 main.go:141] libmachine: (ha-572000-m03) DBG | pid 2972 is in state "Stopped"
	I0717 10:38:12.977512    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/hyperkit.pid...
	I0717 10:38:12.977862    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Using UUID 5064fb5d-6e32-4be4-8d75-15b09204e5f5
	I0717 10:38:13.005572    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Generated MAC 6e:d3:62:da:43:cf
	I0717 10:38:13.005591    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:38:13.005736    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5064fb5d-6e32-4be4-8d75-15b09204e5f5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acc00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:38:13.005764    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5064fb5d-6e32-4be4-8d75-15b09204e5f5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acc00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:38:13.005828    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5064fb5d-6e32-4be4-8d75-15b09204e5f5", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/ha-572000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:38:13.005888    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5064fb5d-6e32-4be4-8d75-15b09204e5f5 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/ha-572000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:38:13.005909    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:38:13.007252    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: Pid is 3665
	I0717 10:38:13.007703    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Attempt 0
	I0717 10:38:13.007718    3636 main.go:141] libmachine: (ha-572000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:38:13.007809    3636 main.go:141] libmachine: (ha-572000-m03) DBG | hyperkit pid from json: 3665
	I0717 10:38:13.009827    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Searching for 6e:d3:62:da:43:cf in /var/db/dhcpd_leases ...
	I0717 10:38:13.009874    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:38:13.009921    3636 main.go:141] libmachine: (ha-572000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x669952ec}
	I0717 10:38:13.009945    3636 main.go:141] libmachine: (ha-572000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x669952da}
	I0717 10:38:13.009959    3636 main.go:141] libmachine: (ha-572000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:37:45:6a:f1:7f ID:1,1e:37:45:6a:f1:7f Lease:0x6698001b}
	I0717 10:38:13.009965    3636 main.go:141] libmachine: (ha-572000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:d3:62:da:43:cf ID:1,6e:d3:62:da:43:cf Lease:0x669950e4}
	I0717 10:38:13.009979    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetConfigRaw
	I0717 10:38:13.009982    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Found match: 6e:d3:62:da:43:cf
	I0717 10:38:13.009992    3636 main.go:141] libmachine: (ha-572000-m03) DBG | IP: 192.169.0.7
	I0717 10:38:13.010657    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetIP
	I0717 10:38:13.010834    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:38:13.011336    3636 machine.go:94] provisionDockerMachine start ...
	I0717 10:38:13.011346    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:13.011471    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:13.011562    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:13.011675    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:13.011768    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:13.011883    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:13.012034    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:13.012203    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:13.012211    3636 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:38:13.014976    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:38:13.023104    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:38:13.024110    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:38:13.024135    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:38:13.024157    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:38:13.024175    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:38:13.404157    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:38:13.404173    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:38:13.519656    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:38:13.519690    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:38:13.519727    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:38:13.519751    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:38:13.520524    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:38:13.520534    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:38:18.810258    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:18 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0717 10:38:18.810297    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:18 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0717 10:38:18.810307    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:18 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0717 10:38:18.834790    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:18 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0717 10:38:24.076646    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:38:24.076665    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetMachineName
	I0717 10:38:24.076790    3636 buildroot.go:166] provisioning hostname "ha-572000-m03"
	I0717 10:38:24.076802    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetMachineName
	I0717 10:38:24.076886    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.077024    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.077111    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.077196    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.077278    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.077404    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:24.077556    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:24.077565    3636 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000-m03 && echo "ha-572000-m03" | sudo tee /etc/hostname
	I0717 10:38:24.142857    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000-m03
	
	I0717 10:38:24.142872    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.143001    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.143104    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.143196    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.143280    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.143395    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:24.143539    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:24.143551    3636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:38:24.203331    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:38:24.203349    3636 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:38:24.203359    3636 buildroot.go:174] setting up certificates
	I0717 10:38:24.203364    3636 provision.go:84] configureAuth start
	I0717 10:38:24.203370    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetMachineName
	I0717 10:38:24.203518    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetIP
	I0717 10:38:24.203623    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.203721    3636 provision.go:143] copyHostCerts
	I0717 10:38:24.203751    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:38:24.203800    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:38:24.203806    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:38:24.203931    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:38:24.204144    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:38:24.204174    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:38:24.204179    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:38:24.204294    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:38:24.204463    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:38:24.204496    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:38:24.204500    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:38:24.204570    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:38:24.204726    3636 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000-m03 san=[127.0.0.1 192.169.0.7 ha-572000-m03 localhost minikube]
	I0717 10:38:24.389534    3636 provision.go:177] copyRemoteCerts
	I0717 10:38:24.389582    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:38:24.389597    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.389749    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.389840    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.389936    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.390018    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa Username:docker}
	I0717 10:38:24.424587    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:38:24.424660    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:38:24.444455    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:38:24.444522    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 10:38:24.465006    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:38:24.465071    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:38:24.485065    3636 provision.go:87] duration metric: took 281.685984ms to configureAuth
	I0717 10:38:24.485079    3636 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:38:24.485254    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:38:24.485268    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:24.485399    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.485509    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.485606    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.485695    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.485780    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.485889    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:24.486018    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:24.486026    3636 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:38:24.539772    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:38:24.539786    3636 buildroot.go:70] root file system type: tmpfs
	I0717 10:38:24.539874    3636 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:38:24.539885    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.540019    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.540102    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.540205    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.540313    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.540462    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:24.540607    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:24.540655    3636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:38:24.605074    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:38:24.605091    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.605230    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.605339    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.605424    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.605494    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.605620    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:24.605771    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:24.605784    3636 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:38:26.231394    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:38:26.231416    3636 machine.go:97] duration metric: took 13.21973714s to provisionDockerMachine
	I0717 10:38:26.231428    3636 start.go:293] postStartSetup for "ha-572000-m03" (driver="hyperkit")
	I0717 10:38:26.231437    3636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:38:26.231448    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.231633    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:38:26.231652    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:26.231764    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:26.231872    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.231959    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:26.232054    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa Username:docker}
	I0717 10:38:26.266647    3636 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:38:26.269791    3636 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:38:26.269801    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:38:26.269897    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:38:26.270060    3636 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:38:26.270067    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:38:26.270227    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:38:26.278127    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:38:26.297704    3636 start.go:296] duration metric: took 66.264765ms for postStartSetup
	I0717 10:38:26.297725    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.297894    3636 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:38:26.297906    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:26.297982    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:26.298095    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.298185    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:26.298259    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa Username:docker}
	I0717 10:38:26.332566    3636 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:38:26.332629    3636 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:38:26.364567    3636 fix.go:56] duration metric: took 13.463410955s for fixHost
	I0717 10:38:26.364593    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:26.364774    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:26.364878    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.364991    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.365075    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:26.365213    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:26.365360    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:26.365368    3636 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 10:38:26.420992    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237906.507932482
	
	I0717 10:38:26.421006    3636 fix.go:216] guest clock: 1721237906.507932482
	I0717 10:38:26.421017    3636 fix.go:229] Guest: 2024-07-17 10:38:26.507932482 -0700 PDT Remote: 2024-07-17 10:38:26.364583 -0700 PDT m=+65.237237021 (delta=143.349482ms)
	I0717 10:38:26.421032    3636 fix.go:200] guest clock delta is within tolerance: 143.349482ms
	I0717 10:38:26.421036    3636 start.go:83] releasing machines lock for "ha-572000-m03", held for 13.519917261s
	I0717 10:38:26.421054    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.421181    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetIP
	I0717 10:38:26.443010    3636 out.go:177] * Found network options:
	I0717 10:38:26.464409    3636 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0717 10:38:26.487460    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:38:26.487486    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:38:26.487503    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.488209    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.488434    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.488546    3636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:38:26.488583    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	W0717 10:38:26.488701    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:38:26.488736    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:38:26.488809    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:26.488843    3636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 10:38:26.488855    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:26.489040    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.489074    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:26.489211    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:26.489222    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.489320    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa Username:docker}
	I0717 10:38:26.489386    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:26.489533    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa Username:docker}
	W0717 10:38:26.520778    3636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:38:26.520842    3636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:38:26.572109    3636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:38:26.572138    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:38:26.572238    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:38:26.587958    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:38:26.596058    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:38:26.604066    3636 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:38:26.604116    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:38:26.612485    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:38:26.620942    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:38:26.629083    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:38:26.637275    3636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:38:26.645515    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:38:26.653717    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:38:26.662055    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:38:26.670484    3636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:38:26.677700    3636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:38:26.684962    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:26.781787    3636 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:38:26.802958    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:38:26.803029    3636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:38:26.827692    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:38:26.840860    3636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:38:26.869195    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:38:26.881705    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:38:26.892987    3636 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:38:26.911733    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:38:26.922817    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:38:26.938911    3636 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:38:26.941995    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:38:26.951587    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:38:26.965318    3636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:38:27.062809    3636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:38:27.181748    3636 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:38:27.181774    3636 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:38:27.195694    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:27.293396    3636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:38:29.632743    3636 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.339268733s)
	I0717 10:38:29.632812    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:38:29.643610    3636 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0717 10:38:29.657480    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:38:29.668578    3636 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:38:29.772887    3636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:38:29.887343    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:29.983127    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:38:29.998340    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:38:30.010843    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:30.124553    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 10:38:30.193605    3636 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:38:30.193684    3636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:38:30.198773    3636 start.go:563] Will wait 60s for crictl version
	I0717 10:38:30.198857    3636 ssh_runner.go:195] Run: which crictl
	I0717 10:38:30.202846    3636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:38:30.233816    3636 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 10:38:30.233915    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:38:30.253337    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:38:30.311688    3636 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:38:30.384020    3636 out.go:177]   - env NO_PROXY=192.169.0.5
	I0717 10:38:30.444054    3636 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0717 10:38:30.480967    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetIP
	I0717 10:38:30.481248    3636 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:38:30.485047    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:38:30.495793    3636 mustload.go:65] Loading cluster: ha-572000
	I0717 10:38:30.495976    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:38:30.496198    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:38:30.496221    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:38:30.505198    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52009
	I0717 10:38:30.505558    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:38:30.505932    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:38:30.505942    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:38:30.506222    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:38:30.506342    3636 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:38:30.506437    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:38:30.506526    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3650
	I0717 10:38:30.507493    3636 host.go:66] Checking if "ha-572000" exists ...
	I0717 10:38:30.507764    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:38:30.507798    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:38:30.516606    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52011
	I0717 10:38:30.516943    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:38:30.517270    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:38:30.517281    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:38:30.517513    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:38:30.517630    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:38:30.517732    3636 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000 for IP: 192.169.0.7
	I0717 10:38:30.517737    3636 certs.go:194] generating shared ca certs ...
	I0717 10:38:30.517751    3636 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:38:30.517912    3636 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:38:30.517964    3636 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:38:30.517973    3636 certs.go:256] generating profile certs ...
	I0717 10:38:30.518074    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key
	I0717 10:38:30.518169    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.562e5459
	I0717 10:38:30.518222    3636 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key
	I0717 10:38:30.518229    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:38:30.518253    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:38:30.518273    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:38:30.518296    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:38:30.518321    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 10:38:30.518340    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 10:38:30.518358    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 10:38:30.518375    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 10:38:30.518476    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:38:30.518520    3636 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:38:30.518529    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:38:30.518566    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:38:30.518602    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:38:30.518634    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:38:30.518702    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:38:30.518736    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:38:30.518764    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:38:30.518783    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:38:30.518808    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:38:30.518899    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:38:30.518987    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:38:30.519076    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:38:30.519152    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:38:30.544343    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0717 10:38:30.547913    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 10:38:30.557636    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0717 10:38:30.561333    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0717 10:38:30.570252    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 10:38:30.573631    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 10:38:30.582360    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0717 10:38:30.585629    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0717 10:38:30.593318    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0717 10:38:30.596412    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 10:38:30.604690    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0717 10:38:30.607967    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 10:38:30.616462    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:38:30.638619    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:38:30.660075    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:38:30.679834    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:38:30.699712    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 10:38:30.720095    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 10:38:30.740379    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 10:38:30.760837    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 10:38:30.780662    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:38:30.800982    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:38:30.821007    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:38:30.841019    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 10:38:30.855040    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0717 10:38:30.868897    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 10:38:30.882296    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0717 10:38:30.895884    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 10:38:30.909514    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 10:38:30.923253    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0717 10:38:30.937006    3636 ssh_runner.go:195] Run: openssl version
	I0717 10:38:30.941436    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:38:30.950257    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:38:30.955139    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:38:30.955192    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:38:30.959572    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 10:38:30.968160    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:38:30.976579    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:38:30.980025    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:38:30.980067    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:38:30.984288    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:38:30.992609    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:38:31.001221    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:38:31.004796    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:38:31.004841    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:38:31.009065    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
	I0717 10:38:31.017464    3636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:38:31.021030    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 10:38:31.025586    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 10:38:31.029983    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 10:38:31.034293    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 10:38:31.038625    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 10:38:31.042961    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 10:38:31.047275    3636 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.30.2 docker true true} ...
	I0717 10:38:31.047334    3636 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-572000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 10:38:31.047351    3636 kube-vip.go:115] generating kube-vip config ...
	I0717 10:38:31.047388    3636 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 10:38:31.059333    3636 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 10:38:31.059386    3636 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0717 10:38:31.059445    3636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:38:31.067249    3636 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:38:31.067300    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 10:38:31.075304    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0717 10:38:31.088747    3636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:38:31.102087    3636 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0717 10:38:31.115605    3636 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0717 10:38:31.118396    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:38:31.128499    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:31.224486    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:38:31.238639    3636 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:38:31.238848    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:38:31.259920    3636 out.go:177] * Verifying Kubernetes components...
	I0717 10:38:31.280661    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:31.399137    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:38:31.415018    3636 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:38:31.415346    3636 kapi.go:59] client config for ha-572000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa0a8b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 10:38:31.415404    3636 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0717 10:38:31.415666    3636 node_ready.go:35] waiting up to 6m0s for node "ha-572000-m03" to be "Ready" ...
	I0717 10:38:31.415725    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:31.415732    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.415740    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.415745    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.421957    3636 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 10:38:31.422260    3636 node_ready.go:49] node "ha-572000-m03" has status "Ready":"True"
	I0717 10:38:31.422274    3636 node_ready.go:38] duration metric: took 6.596243ms for node "ha-572000-m03" to be "Ready" ...
	I0717 10:38:31.422281    3636 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:38:31.422331    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:31.422337    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.422343    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.422347    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.431073    3636 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 10:38:31.436681    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:31.436766    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:31.436772    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.436778    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.436782    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.440248    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:31.440722    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:31.440730    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.440735    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.440738    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.442939    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:31.937618    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:31.937636    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.937668    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.937673    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.940388    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:31.940820    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:31.940828    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.940834    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.940838    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.943159    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:32.437866    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:32.437879    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:32.437885    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:32.437888    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:32.446284    3636 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 10:38:32.446927    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:32.446936    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:32.446943    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:32.446948    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:32.452237    3636 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 10:38:32.937878    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:32.937890    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:32.937896    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:32.937901    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:32.940439    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:32.941049    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:32.941057    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:32.941064    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:32.941080    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:32.943760    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:33.437735    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:33.437751    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:33.437757    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:33.437760    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:33.440741    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:33.441277    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:33.441285    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:33.441291    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:33.441302    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:33.443897    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:33.444546    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:33.938758    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:33.938781    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:33.938787    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:33.938791    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:33.941068    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:33.941437    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:33.941445    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:33.941451    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:33.941462    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:33.943283    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:34.437334    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:34.437347    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:34.437354    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:34.437357    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:34.440066    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:34.440546    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:34.440554    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:34.440560    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:34.440563    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:34.442659    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:34.938574    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:34.938586    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:34.938593    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:34.938602    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:34.941243    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:34.941810    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:34.941818    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:34.941824    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:34.941827    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:34.943881    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:35.437928    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:35.437948    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:35.437959    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:35.437965    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:35.441416    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:35.441923    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:35.441931    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:35.441937    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:35.441941    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:35.443781    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:35.937111    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:35.937132    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:35.937144    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:35.937149    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:35.941097    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:35.941689    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:35.941702    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:35.941708    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:35.941711    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:35.943483    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:35.943912    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:36.437284    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:36.437298    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:36.437304    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:36.437308    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:36.439570    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:36.440110    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:36.440117    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:36.440127    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:36.440130    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:36.441781    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:36.938251    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:36.938279    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:36.938357    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:36.938372    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:36.941451    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:36.942095    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:36.942103    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:36.942109    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:36.942112    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:36.943809    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:37.438234    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:37.438246    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:37.438251    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:37.438256    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:37.440243    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:37.440651    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:37.440658    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:37.440664    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:37.440674    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:37.442390    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:37.938519    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:37.938538    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:37.938588    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:37.938592    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:37.940708    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:37.941242    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:37.941250    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:37.941256    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:37.941260    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:37.942969    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:38.437210    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:38.437229    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:38.437263    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:38.437275    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:38.440621    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:38.441113    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:38.441120    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:38.441126    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:38.441130    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:38.444813    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:38.445187    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:38.937338    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:38.937354    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:38.937363    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:38.937368    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:38.939598    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:38.940020    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:38.940027    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:38.940033    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:38.940038    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:38.941562    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:39.437538    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:39.437553    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:39.437563    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:39.437566    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:39.439993    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:39.440392    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:39.440400    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:39.440405    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:39.440408    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:39.442187    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:39.938827    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:39.938859    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:39.938867    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:39.938871    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:39.941007    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:39.941470    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:39.941477    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:39.941482    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:39.941486    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:39.943155    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:40.437526    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:40.437540    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:40.437546    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:40.437550    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:40.439587    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:40.440056    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:40.440063    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:40.440068    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:40.440072    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:40.441961    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:40.937672    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:40.937688    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:40.937697    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:40.937701    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:40.940217    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:40.940568    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:40.940576    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:40.940581    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:40.940585    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:40.942351    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:40.942718    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:41.437331    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:41.437344    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:41.437350    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:41.437354    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:41.439766    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:41.440280    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:41.440287    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:41.440293    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:41.440296    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:41.441965    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:41.938758    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:41.938778    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:41.938790    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:41.938798    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:41.941589    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:41.942137    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:41.942146    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:41.942152    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:41.942157    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:41.943723    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:42.438172    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:42.438185    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:42.438194    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:42.438198    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:42.440429    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:42.440980    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:42.440988    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:42.440994    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:42.440998    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:42.442893    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:42.938134    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:42.938172    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:42.938183    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:42.938191    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:42.940744    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:42.941114    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:42.941122    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:42.941127    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:42.941131    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:42.942787    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:42.943905    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:43.438163    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:43.438195    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:43.438217    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:43.438224    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:43.440858    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:43.441272    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:43.441279    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:43.441288    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:43.441292    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:43.443069    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:43.937578    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:43.937589    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:43.937596    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:43.937599    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:43.939582    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:43.940136    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:43.940144    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:43.940150    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:43.940152    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:43.941646    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:44.437231    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:44.437244    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:44.437250    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:44.437254    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:44.439651    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:44.440190    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:44.440197    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:44.440202    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:44.440206    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:44.442158    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:44.937185    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:44.937196    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:44.937203    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:44.937206    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:44.939361    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:44.939788    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:44.939796    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:44.939802    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:44.939805    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:44.941482    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:45.437377    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:45.437392    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:45.437401    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:45.437406    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:45.439768    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:45.440303    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:45.440311    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:45.440317    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:45.440320    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:45.441925    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:45.442312    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:45.939181    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:45.939236    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:45.939246    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:45.939253    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:45.941938    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:45.942549    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:45.942557    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:45.942563    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:45.942566    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:45.944281    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:46.437228    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:46.437238    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:46.437245    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:46.437248    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:46.439099    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:46.439744    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:46.439751    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:46.439757    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:46.439760    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:46.441200    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:46.938133    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:46.938186    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:46.938196    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:46.938202    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:46.940467    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:46.940876    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:46.940884    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:46.940890    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:46.940893    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:46.942527    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:47.437838    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:47.437850    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:47.437857    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:47.437861    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:47.440152    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:47.440651    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:47.440660    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:47.440665    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:47.440669    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:47.442745    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:47.443107    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:47.937851    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:47.937867    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:47.937873    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:47.937876    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:47.940047    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:47.940510    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:47.940517    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:47.940523    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:47.940530    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:47.942242    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:48.439255    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:48.439310    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:48.439329    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:48.439338    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:48.442468    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:48.443256    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:48.443264    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:48.443269    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:48.443272    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:48.444868    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:48.937733    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:48.937744    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:48.937750    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:48.937753    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:48.939731    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:48.940190    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:48.940198    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:48.940204    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:48.940207    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:48.941747    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:49.438149    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:49.438169    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:49.438181    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:49.438190    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:49.441135    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:49.441712    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:49.441721    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:49.441726    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:49.441738    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:49.443421    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:49.443800    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:49.937835    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:49.937887    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:49.937895    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:49.937905    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:49.940121    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:49.940667    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:49.940674    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:49.940680    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:49.940698    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:49.942630    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:50.438458    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:50.438469    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:50.438476    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:50.438483    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:50.440697    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:50.441412    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:50.441420    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:50.441426    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:50.441430    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:50.443161    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:50.937976    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:50.937995    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:50.938003    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:50.938009    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:50.940796    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:50.941307    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:50.941315    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:50.941320    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:50.941323    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:50.943029    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:51.437692    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:51.437705    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:51.437714    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:51.437720    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:51.440262    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:51.440918    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:51.440926    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:51.440932    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:51.440936    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:51.442631    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:51.937774    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:51.937792    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:51.937801    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:51.937807    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:51.940276    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:51.940668    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:51.940675    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:51.940681    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:51.940685    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:51.942296    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:51.942616    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:52.438854    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:52.438878    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:52.438892    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:52.438900    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:52.442008    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:52.442522    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:52.442530    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:52.442536    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:52.442540    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:52.444262    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:52.937664    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:52.937675    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:52.937684    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:52.937687    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:52.939825    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:52.940415    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:52.940422    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:52.940428    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:52.940432    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:52.942064    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:53.439277    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:53.439300    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:53.439309    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:53.439315    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:53.441705    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:53.442130    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:53.442138    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:53.442143    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:53.442146    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:53.443926    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:53.938741    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:53.938755    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:53.938785    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:53.938790    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:53.941015    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:53.941672    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:53.941680    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:53.941685    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:53.941689    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:53.943953    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:53.944413    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:54.438636    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:54.438654    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:54.438663    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:54.438668    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:54.441262    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:54.441677    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:54.441684    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:54.441690    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:54.441693    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:54.443309    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:54.938770    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:54.938788    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:54.938798    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:54.938802    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:54.941486    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:54.941877    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:54.941884    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:54.941890    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:54.941893    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:54.943590    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:55.438030    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:55.438049    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:55.438059    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:55.438064    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:55.440706    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:55.441272    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:55.441280    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:55.441289    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:55.441292    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:55.443295    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:55.938147    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:55.938203    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:55.938215    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:55.938222    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:55.940270    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:55.940729    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:55.940737    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:55.940742    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:55.940745    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:55.942359    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:56.437637    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:56.437654    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:56.437666    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:56.437671    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:56.440401    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:56.440900    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:56.440909    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:56.440916    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:56.440920    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:56.442737    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:56.443083    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:56.938496    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:56.938521    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:56.938533    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:56.938541    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:56.941967    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:56.942683    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:56.942691    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:56.942697    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:56.942707    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:56.944542    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:57.438317    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:57.438392    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:57.438405    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:57.438411    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:57.441323    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:57.441768    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:57.441776    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:57.441780    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:57.441793    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:57.443513    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:57.937977    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:57.937990    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:57.937996    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:57.938000    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:57.940155    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:57.940631    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:57.940639    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:57.940645    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:57.940650    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:57.942518    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:58.438589    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:58.438606    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:58.438612    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:58.438615    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:58.440808    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:58.441401    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:58.441409    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:58.441415    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:58.441423    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:58.443141    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:58.443478    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:58.938651    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:58.938670    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:58.938679    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:58.938683    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:58.940981    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:58.941414    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:58.941422    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:58.941428    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:58.941431    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:58.943207    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:59.437795    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:59.437809    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:59.437815    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:59.437819    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:59.440022    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:59.440439    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:59.440446    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:59.440452    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:59.440457    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:59.442209    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:59.938380    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:59.938393    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:59.938400    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:59.938403    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:59.940648    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:59.941030    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:59.941038    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:59.941044    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:59.941048    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:59.942631    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:00.437586    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:00.437607    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:00.437616    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:00.437621    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:00.440082    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:00.440574    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:00.440582    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:00.440588    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:00.440591    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:00.442224    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:00.939171    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:00.939189    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:00.939198    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:00.939203    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:00.941658    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:00.942057    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:00.942065    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:00.942071    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:00.942075    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:00.943872    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:00.944304    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:01.438420    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:01.438444    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:01.438462    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:01.438475    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:01.441885    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:01.442448    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:01.442456    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:01.442462    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:01.442473    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:01.444325    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:01.937741    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:01.937759    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:01.937769    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:01.937774    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:01.941004    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:01.941638    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:01.941645    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:01.941651    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:01.941655    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:01.943421    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:02.439464    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:02.439515    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:02.439539    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:02.439547    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:02.442788    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:02.443568    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:02.443575    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:02.443581    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:02.443584    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:02.445070    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:02.939355    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:02.939398    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:02.939423    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:02.939432    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:02.943288    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:02.943786    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:02.943793    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:02.943798    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:02.943808    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:02.945549    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:02.945918    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:03.437814    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:03.437833    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:03.437846    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:03.437852    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:03.440696    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:03.441473    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:03.441481    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:03.441487    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:03.441494    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:03.443180    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:03.938154    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:03.938171    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:03.938179    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:03.938185    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:03.940749    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:03.941323    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:03.941330    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:03.941336    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:03.941338    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:03.942986    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:04.438509    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:04.438533    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:04.438544    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:04.438552    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:04.441587    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:04.442338    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:04.442346    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:04.442351    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:04.442354    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:04.443865    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:04.939464    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:04.939517    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:04.939527    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:04.939530    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:04.941589    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:04.942132    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:04.942139    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:04.942144    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:04.942147    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:04.943787    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:05.437854    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:05.437866    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:05.437872    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:05.437875    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:05.439895    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:05.440295    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:05.440303    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:05.440308    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:05.440312    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:05.441766    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:05.442130    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:05.937813    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:05.937871    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:05.937882    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:05.937888    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:05.940367    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:05.940885    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:05.940892    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:05.940898    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:05.940902    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:05.942721    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:06.438966    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:06.438991    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:06.439007    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:06.439020    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:06.442137    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:06.442785    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:06.442793    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:06.442799    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:06.442802    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:06.444436    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:06.938695    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:06.938714    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:06.938723    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:06.938727    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:06.941327    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:06.941790    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:06.941798    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:06.941802    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:06.941805    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:06.943432    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:07.438469    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:07.438553    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:07.438567    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:07.438573    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:07.442157    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:07.442736    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:07.442744    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:07.442750    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:07.442754    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:07.444281    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:07.444696    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:07.937804    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:07.937815    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:07.937821    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:07.937823    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:07.939794    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:07.940418    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:07.940426    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:07.940432    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:07.940435    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:07.942179    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:08.437799    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:08.437814    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:08.437821    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:08.437827    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:08.440300    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:08.440760    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:08.440768    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:08.440773    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:08.440776    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:08.442402    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:08.938764    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:08.938789    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:08.938896    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:08.938909    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:08.942041    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:08.942737    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:08.942744    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:08.942751    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:08.942754    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:08.944691    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:09.437781    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:09.437795    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:09.437802    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:09.437807    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:09.440310    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:09.440716    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:09.440725    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:09.440731    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:09.440741    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:09.442571    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:09.937834    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:09.937847    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:09.937853    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:09.937856    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:09.939731    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:09.940144    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:09.940153    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:09.940159    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:09.940163    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:09.941982    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:09.942266    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:10.438403    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:10.438414    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:10.438421    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:10.438424    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:10.440749    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:10.441120    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:10.441127    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:10.441133    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:10.441138    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:10.442757    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:10.939169    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:10.939227    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:10.939238    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:10.939244    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:10.942004    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:10.942575    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:10.942582    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:10.942588    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:10.942591    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:10.944436    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:11.438251    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:11.438276    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:11.438353    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:11.438364    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:11.441421    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:11.441961    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:11.441969    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:11.441975    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:11.441979    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:11.446242    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:39:11.938022    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:11.938033    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:11.938040    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:11.938044    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:11.939924    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:11.940511    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:11.940519    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:11.940525    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:11.940528    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:11.942450    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:11.942833    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:12.439246    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:12.439269    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:12.439279    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:12.439285    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:12.442445    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:12.443020    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:12.443027    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:12.443033    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:12.443037    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:12.444778    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:12.939028    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:12.939059    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:12.939075    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:12.939144    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:12.941663    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:12.942169    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:12.942176    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:12.942182    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:12.942198    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:12.944174    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:13.439017    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:13.439030    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:13.439036    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:13.439039    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:13.441436    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:13.442003    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:13.442011    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:13.442017    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:13.442020    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:13.443715    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:13.939125    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:13.939138    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:13.939150    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:13.939154    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:13.941396    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:13.942124    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:13.942133    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:13.942138    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:13.942141    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:13.943860    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:13.944207    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:14.439525    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:14.439539    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:14.439545    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:14.439549    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:14.441636    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:14.442072    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:14.442080    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:14.442085    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:14.442088    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:14.443727    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:14.938392    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:14.938412    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:14.938425    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:14.938431    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:14.941839    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:14.942527    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:14.942535    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:14.942541    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:14.942556    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:14.944390    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:15.439124    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:15.439154    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:15.439236    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:15.439243    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:15.442572    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:15.443123    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:15.443133    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:15.443141    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:15.443145    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:15.445133    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:15.938789    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:15.938855    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:15.938870    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:15.938877    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:15.941774    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:15.942286    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:15.942294    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:15.942300    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:15.942304    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:15.944348    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:15.944660    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:16.439349    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:16.439368    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:16.439378    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:16.439383    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:16.441938    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:16.442524    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:16.442532    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:16.442537    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:16.442548    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:16.444186    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:16.938018    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:16.938067    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:16.938075    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:16.938081    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:16.940227    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:16.940771    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:16.940780    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:16.940785    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:16.940789    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:16.942609    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:17.438002    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:17.438028    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:17.438034    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:17.438038    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:17.440220    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:17.440724    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:17.440733    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:17.440739    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:17.440742    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:17.442604    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:17.938219    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:17.938237    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:17.938249    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:17.938255    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:17.941281    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:17.941690    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:17.941698    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:17.941703    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:17.941707    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:17.943715    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:18.439167    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:18.439186    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:18.439195    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:18.439200    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:18.441725    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:18.442096    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:18.442104    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:18.442109    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:18.442113    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:18.443738    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:18.444159    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:18.939393    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:18.939469    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:18.939479    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:18.939485    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:18.941987    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:18.942423    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:18.942431    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:18.942436    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:18.942439    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:18.944249    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:19.438795    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:19.438808    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.438814    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.438816    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.441023    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:19.441456    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:19.441464    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.441470    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.441475    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.443744    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:19.444095    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.444104    3636 pod_ready.go:81] duration metric: took 48.006189425s for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.444111    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.444150    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9dzd5
	I0717 10:39:19.444154    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.444160    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.444165    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.447092    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:19.447847    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:19.447856    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.447861    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.447865    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.449618    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:19.449899    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.449908    3636 pod_ready.go:81] duration metric: took 5.792129ms for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.449915    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.449950    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000
	I0717 10:39:19.449955    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.449961    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.449966    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.451887    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:19.452242    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:19.452249    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.452255    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.452259    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.455734    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:19.456038    3636 pod_ready.go:92] pod "etcd-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.456048    3636 pod_ready.go:81] duration metric: took 6.128452ms for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.456055    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.456091    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m02
	I0717 10:39:19.456096    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.456102    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.456104    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.459121    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:19.459474    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:19.459482    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.459487    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.459491    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.461049    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:19.461321    3636 pod_ready.go:92] pod "etcd-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.461330    3636 pod_ready.go:81] duration metric: took 5.269541ms for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.461337    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.461367    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m03
	I0717 10:39:19.461373    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.461378    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.461381    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.463280    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:19.463738    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:19.463745    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.463750    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.463754    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.466609    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:19.466864    3636 pod_ready.go:92] pod "etcd-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.466874    3636 pod_ready.go:81] duration metric: took 5.532002ms for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.466885    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.640514    3636 request.go:629] Waited for 173.589043ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:39:19.640593    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:39:19.640602    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.640610    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.640614    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.643241    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:19.839100    3636 request.go:629] Waited for 195.343311ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:19.839145    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:19.839152    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.839188    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.839194    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.845230    3636 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 10:39:19.845548    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.845558    3636 pod_ready.go:81] duration metric: took 378.657463ms for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.845565    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:20.040239    3636 request.go:629] Waited for 194.632219ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:39:20.040319    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:39:20.040328    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:20.040336    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:20.040342    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:20.042714    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:20.240297    3636 request.go:629] Waited for 196.995157ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:20.240378    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:20.240384    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:20.240390    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:20.240396    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:20.242369    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:20.242695    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:20.242704    3636 pod_ready.go:81] duration metric: took 397.124019ms for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:20.242711    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:20.439359    3636 request.go:629] Waited for 196.544114ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:39:20.439408    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:39:20.439416    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:20.439427    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:20.439434    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:20.442435    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:20.638955    3636 request.go:629] Waited for 196.048572ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:20.639046    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:20.639056    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:20.639068    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:20.639075    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:20.642008    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:20.642430    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:20.642442    3636 pod_ready.go:81] duration metric: took 399.714561ms for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:20.642451    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:20.838986    3636 request.go:629] Waited for 196.455933ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:39:20.839106    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:39:20.839119    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:20.839131    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:20.839141    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:20.842621    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:21.039118    3636 request.go:629] Waited for 195.900542ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:21.039165    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:21.039176    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:21.039188    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:21.039196    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:21.042149    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:21.042711    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:21.042741    3636 pod_ready.go:81] duration metric: took 400.268935ms for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:21.042748    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:21.238981    3636 request.go:629] Waited for 196.178207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:39:21.239040    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:39:21.239051    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:21.239063    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:21.239071    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:21.242170    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:21.440519    3636 request.go:629] Waited for 197.63517ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:21.440569    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:21.440581    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:21.440597    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:21.440606    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:21.443784    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:21.444203    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:21.444212    3636 pod_ready.go:81] duration metric: took 401.448672ms for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:21.444219    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:21.640166    3636 request.go:629] Waited for 195.890355ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:39:21.640224    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:39:21.640235    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:21.640246    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:21.640254    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:21.643178    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:21.840025    3636 request.go:629] Waited for 196.38625ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:21.840077    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:21.840087    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:21.840099    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:21.840107    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:21.842881    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:21.843340    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:21.843349    3636 pod_ready.go:81] duration metric: took 399.115148ms for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:21.843356    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:22.038929    3636 request.go:629] Waited for 195.527396ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:39:22.038980    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:39:22.038991    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:22.039000    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:22.039006    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:22.041797    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:22.239447    3636 request.go:629] Waited for 196.85315ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:39:22.239496    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:39:22.239504    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:22.239515    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:22.239525    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:22.242443    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:22.242932    3636 pod_ready.go:97] node "ha-572000-m04" hosting pod "kube-proxy-5wcph" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-572000-m04" has status "Ready":"Unknown"
	I0717 10:39:22.242948    3636 pod_ready.go:81] duration metric: took 399.575996ms for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	E0717 10:39:22.242956    3636 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-572000-m04" hosting pod "kube-proxy-5wcph" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-572000-m04" has status "Ready":"Unknown"
	I0717 10:39:22.242964    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:22.439269    3636 request.go:629] Waited for 196.255356ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:39:22.439393    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:39:22.439403    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:22.439414    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:22.439420    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:22.442456    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:22.640394    3636 request.go:629] Waited for 197.266214ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:22.640491    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:22.640500    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:22.640509    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:22.640514    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:22.643031    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:22.643471    3636 pod_ready.go:92] pod "kube-proxy-h7k9z" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:22.643480    3636 pod_ready.go:81] duration metric: took 400.50076ms for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:22.643487    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:22.839377    3636 request.go:629] Waited for 195.844443ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:39:22.839468    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:39:22.839477    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:22.839485    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:22.839491    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:22.841921    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:23.039004    3636 request.go:629] Waited for 196.604394ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:23.039109    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:23.039120    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:23.039131    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:23.039138    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:23.042022    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:23.042449    3636 pod_ready.go:92] pod "kube-proxy-hst7h" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:23.042462    3636 pod_ready.go:81] duration metric: took 398.959822ms for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:23.042480    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:23.240001    3636 request.go:629] Waited for 197.469314ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:39:23.240093    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:39:23.240110    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:23.240121    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:23.240131    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:23.243284    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:23.439300    3636 request.go:629] Waited for 195.300943ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:23.439332    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:23.439336    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:23.439343    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:23.439370    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:23.441287    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:23.441722    3636 pod_ready.go:92] pod "kube-proxy-v6jxh" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:23.441732    3636 pod_ready.go:81] duration metric: took 399.23495ms for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:23.441739    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:23.638943    3636 request.go:629] Waited for 197.165268ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:39:23.639000    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:39:23.639006    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:23.639012    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:23.639017    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:23.641044    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:23.840535    3636 request.go:629] Waited for 199.126882ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:23.840627    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:23.840639    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:23.840679    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:23.840691    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:23.843464    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:23.843963    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:23.843976    3636 pod_ready.go:81] duration metric: took 402.220047ms for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:23.843984    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:24.039540    3636 request.go:629] Waited for 195.50331ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:39:24.039598    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:39:24.039670    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.039685    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.039691    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.042477    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:24.239459    3636 request.go:629] Waited for 196.457492ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:24.239561    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:24.239573    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.239585    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.239591    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.242659    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:24.243312    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:24.243327    3636 pod_ready.go:81] duration metric: took 399.325407ms for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:24.243336    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:24.439080    3636 request.go:629] Waited for 195.673891ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:39:24.439191    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:39:24.439202    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.439213    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.439223    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.443262    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:39:24.639182    3636 request.go:629] Waited for 195.517919ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:24.639292    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:24.639303    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.639316    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.639324    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.642200    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:24.642657    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:24.642666    3636 pod_ready.go:81] duration metric: took 399.31371ms for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:24.642674    3636 pod_ready.go:38] duration metric: took 53.219035328s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:39:24.642686    3636 api_server.go:52] waiting for apiserver process to appear ...
	I0717 10:39:24.642749    3636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 10:39:24.655291    3636 api_server.go:72] duration metric: took 53.415271815s to wait for apiserver process to appear ...
	I0717 10:39:24.655303    3636 api_server.go:88] waiting for apiserver healthz status ...
	I0717 10:39:24.655313    3636 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0717 10:39:24.659504    3636 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0717 10:39:24.659539    3636 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0717 10:39:24.659544    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.659549    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.659552    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.660035    3636 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:39:24.660129    3636 api_server.go:141] control plane version: v1.30.2
	I0717 10:39:24.660138    3636 api_server.go:131] duration metric: took 4.830633ms to wait for apiserver health ...
	I0717 10:39:24.660142    3636 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 10:39:24.840282    3636 request.go:629] Waited for 180.099076ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:39:24.840353    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:39:24.840361    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.840369    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.840373    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.845121    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:39:24.850038    3636 system_pods.go:59] 26 kube-system pods found
	I0717 10:39:24.850051    3636 system_pods.go:61] "coredns-7db6d8ff4d-2phrp" [e76a91cf-8a14-454c-baee-7c0128645e0e] Running
	I0717 10:39:24.850054    3636 system_pods.go:61] "coredns-7db6d8ff4d-9dzd5" [f9a7cff1-56a3-4600-ae2f-a951dda10753] Running
	I0717 10:39:24.850057    3636 system_pods.go:61] "etcd-ha-572000" [2d7b717e-d404-4c63-afe9-799de6711964] Running
	I0717 10:39:24.850060    3636 system_pods.go:61] "etcd-ha-572000-m02" [8abc7662-c159-4953-9aba-11a75b4e7d65] Running
	I0717 10:39:24.850062    3636 system_pods.go:61] "etcd-ha-572000-m03" [78629908-d362-4a27-933f-2b929867c22f] Running
	I0717 10:39:24.850065    3636 system_pods.go:61] "kindnet-5xsrp" [0b84aa4d-7452-47f6-8052-f063e2e8ef7b] Running
	I0717 10:39:24.850067    3636 system_pods.go:61] "kindnet-72zfp" [0cc735c4-08b3-499a-b9e2-8c38377f371b] Running
	I0717 10:39:24.850069    3636 system_pods.go:61] "kindnet-g2m92" [be3d84cf-f9e8-426c-b02a-49eb7eed9d6a] Running
	I0717 10:39:24.850071    3636 system_pods.go:61] "kindnet-t85bv" [8649cc70-8ce8-4caf-938e-bd253fa5b7ae] Running
	I0717 10:39:24.850074    3636 system_pods.go:61] "kube-apiserver-ha-572000" [6409591d-7a50-414b-be63-44d5ddb0b0e0] Running
	I0717 10:39:24.850076    3636 system_pods.go:61] "kube-apiserver-ha-572000-m02" [d658459c-67f9-4798-87f2-c651d830af35] Running
	I0717 10:39:24.850078    3636 system_pods.go:61] "kube-apiserver-ha-572000-m03" [ac1c02f1-f023-4ad2-99bc-2657fb9d50d7] Running
	I0717 10:39:24.850081    3636 system_pods.go:61] "kube-controller-manager-ha-572000" [075c4d23-04bd-404a-822d-ae7d326a68ac] Running
	I0717 10:39:24.850084    3636 system_pods.go:61] "kube-controller-manager-ha-572000-m02" [3014bdb3-bb86-48d5-a045-2a8dab026bd8] Running
	I0717 10:39:24.850086    3636 system_pods.go:61] "kube-controller-manager-ha-572000-m03" [3b76e220-3a5b-4dac-8f69-6b380799d2ef] Running
	I0717 10:39:24.850088    3636 system_pods.go:61] "kube-proxy-5wcph" [731f5b57-131e-4e97-b47a-036b8d4edbcd] Running
	I0717 10:39:24.850105    3636 system_pods.go:61] "kube-proxy-h7k9z" [cf687084-b40d-43b6-9476-8c60c5f37d1d] Running
	I0717 10:39:24.850110    3636 system_pods.go:61] "kube-proxy-hst7h" [2b7c8d2c-3d71-4357-9429-1d8438c446f5] Running
	I0717 10:39:24.850113    3636 system_pods.go:61] "kube-proxy-v6jxh" [3f952fc8-747f-49da-b400-5212dba538d8] Running
	I0717 10:39:24.850116    3636 system_pods.go:61] "kube-scheduler-ha-572000" [da36a422-ba49-4cd6-8f47-9be7c43657be] Running
	I0717 10:39:24.850118    3636 system_pods.go:61] "kube-scheduler-ha-572000-m02" [13e289a2-8d3c-478f-9220-1d114dd5bf62] Running
	I0717 10:39:24.850121    3636 system_pods.go:61] "kube-scheduler-ha-572000-m03" [428716db-8096-4b57-9f90-e706d5683852] Running
	I0717 10:39:24.850124    3636 system_pods.go:61] "kube-vip-ha-572000" [57c4f54d-e859-4429-90dd-0402a9e73727] Running
	I0717 10:39:24.850127    3636 system_pods.go:61] "kube-vip-ha-572000-m02" [fa57b8d2-bc41-42af-9d9a-1b68f882e6fe] Running
	I0717 10:39:24.850129    3636 system_pods.go:61] "kube-vip-ha-572000-m03" [a35c5007-dfe3-4b58-b16c-d568b8f9c1c2] Running
	I0717 10:39:24.850133    3636 system_pods.go:61] "storage-provisioner" [1801216d-6529-4049-b874-1577132fc03f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 10:39:24.850139    3636 system_pods.go:74] duration metric: took 189.987862ms to wait for pod list to return data ...
	I0717 10:39:24.850145    3636 default_sa.go:34] waiting for default service account to be created ...
	I0717 10:39:25.040731    3636 request.go:629] Waited for 190.528349ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0717 10:39:25.040830    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0717 10:39:25.040841    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:25.040852    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:25.040860    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:25.044018    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:25.044088    3636 default_sa.go:45] found service account: "default"
	I0717 10:39:25.044097    3636 default_sa.go:55] duration metric: took 193.941803ms for default service account to be created ...
	I0717 10:39:25.044103    3636 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 10:39:25.240503    3636 request.go:629] Waited for 196.351718ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:39:25.240543    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:39:25.240548    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:25.240554    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:25.240583    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:25.244975    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:39:25.249908    3636 system_pods.go:86] 26 kube-system pods found
	I0717 10:39:25.249919    3636 system_pods.go:89] "coredns-7db6d8ff4d-2phrp" [e76a91cf-8a14-454c-baee-7c0128645e0e] Running
	I0717 10:39:25.249923    3636 system_pods.go:89] "coredns-7db6d8ff4d-9dzd5" [f9a7cff1-56a3-4600-ae2f-a951dda10753] Running
	I0717 10:39:25.249940    3636 system_pods.go:89] "etcd-ha-572000" [2d7b717e-d404-4c63-afe9-799de6711964] Running
	I0717 10:39:25.249944    3636 system_pods.go:89] "etcd-ha-572000-m02" [8abc7662-c159-4953-9aba-11a75b4e7d65] Running
	I0717 10:39:25.249948    3636 system_pods.go:89] "etcd-ha-572000-m03" [78629908-d362-4a27-933f-2b929867c22f] Running
	I0717 10:39:25.249951    3636 system_pods.go:89] "kindnet-5xsrp" [0b84aa4d-7452-47f6-8052-f063e2e8ef7b] Running
	I0717 10:39:25.249955    3636 system_pods.go:89] "kindnet-72zfp" [0cc735c4-08b3-499a-b9e2-8c38377f371b] Running
	I0717 10:39:25.249959    3636 system_pods.go:89] "kindnet-g2m92" [be3d84cf-f9e8-426c-b02a-49eb7eed9d6a] Running
	I0717 10:39:25.249962    3636 system_pods.go:89] "kindnet-t85bv" [8649cc70-8ce8-4caf-938e-bd253fa5b7ae] Running
	I0717 10:39:25.249966    3636 system_pods.go:89] "kube-apiserver-ha-572000" [6409591d-7a50-414b-be63-44d5ddb0b0e0] Running
	I0717 10:39:25.249969    3636 system_pods.go:89] "kube-apiserver-ha-572000-m02" [d658459c-67f9-4798-87f2-c651d830af35] Running
	I0717 10:39:25.249973    3636 system_pods.go:89] "kube-apiserver-ha-572000-m03" [ac1c02f1-f023-4ad2-99bc-2657fb9d50d7] Running
	I0717 10:39:25.249976    3636 system_pods.go:89] "kube-controller-manager-ha-572000" [075c4d23-04bd-404a-822d-ae7d326a68ac] Running
	I0717 10:39:25.249979    3636 system_pods.go:89] "kube-controller-manager-ha-572000-m02" [3014bdb3-bb86-48d5-a045-2a8dab026bd8] Running
	I0717 10:39:25.249983    3636 system_pods.go:89] "kube-controller-manager-ha-572000-m03" [3b76e220-3a5b-4dac-8f69-6b380799d2ef] Running
	I0717 10:39:25.249987    3636 system_pods.go:89] "kube-proxy-5wcph" [731f5b57-131e-4e97-b47a-036b8d4edbcd] Running
	I0717 10:39:25.249990    3636 system_pods.go:89] "kube-proxy-h7k9z" [cf687084-b40d-43b6-9476-8c60c5f37d1d] Running
	I0717 10:39:25.249994    3636 system_pods.go:89] "kube-proxy-hst7h" [2b7c8d2c-3d71-4357-9429-1d8438c446f5] Running
	I0717 10:39:25.249997    3636 system_pods.go:89] "kube-proxy-v6jxh" [3f952fc8-747f-49da-b400-5212dba538d8] Running
	I0717 10:39:25.250001    3636 system_pods.go:89] "kube-scheduler-ha-572000" [da36a422-ba49-4cd6-8f47-9be7c43657be] Running
	I0717 10:39:25.250005    3636 system_pods.go:89] "kube-scheduler-ha-572000-m02" [13e289a2-8d3c-478f-9220-1d114dd5bf62] Running
	I0717 10:39:25.250008    3636 system_pods.go:89] "kube-scheduler-ha-572000-m03" [428716db-8096-4b57-9f90-e706d5683852] Running
	I0717 10:39:25.250012    3636 system_pods.go:89] "kube-vip-ha-572000" [57c4f54d-e859-4429-90dd-0402a9e73727] Running
	I0717 10:39:25.250019    3636 system_pods.go:89] "kube-vip-ha-572000-m02" [fa57b8d2-bc41-42af-9d9a-1b68f882e6fe] Running
	I0717 10:39:25.250026    3636 system_pods.go:89] "kube-vip-ha-572000-m03" [a35c5007-dfe3-4b58-b16c-d568b8f9c1c2] Running
	I0717 10:39:25.250031    3636 system_pods.go:89] "storage-provisioner" [1801216d-6529-4049-b874-1577132fc03f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 10:39:25.250037    3636 system_pods.go:126] duration metric: took 205.924043ms to wait for k8s-apps to be running ...
	I0717 10:39:25.250043    3636 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 10:39:25.250097    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 10:39:25.260730    3636 system_svc.go:56] duration metric: took 10.680441ms WaitForService to wait for kubelet
	I0717 10:39:25.260752    3636 kubeadm.go:582] duration metric: took 54.020711767s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:39:25.260767    3636 node_conditions.go:102] verifying NodePressure condition ...
	I0717 10:39:25.440260    3636 request.go:629] Waited for 179.444294ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0717 10:39:25.440305    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0717 10:39:25.440313    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:25.440326    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:25.440335    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:25.443664    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:25.444820    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:39:25.444830    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:39:25.444839    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:39:25.444842    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:39:25.444845    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:39:25.444848    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:39:25.444851    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:39:25.444854    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:39:25.444857    3636 node_conditions.go:105] duration metric: took 184.081224ms to run NodePressure ...
	I0717 10:39:25.444866    3636 start.go:241] waiting for startup goroutines ...
	I0717 10:39:25.444881    3636 start.go:255] writing updated cluster config ...
	I0717 10:39:25.466841    3636 out.go:177] 
	I0717 10:39:25.488444    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:39:25.488557    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:39:25.511165    3636 out.go:177] * Starting "ha-572000-m04" worker node in "ha-572000" cluster
	I0717 10:39:25.553049    3636 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:39:25.553078    3636 cache.go:56] Caching tarball of preloaded images
	I0717 10:39:25.553293    3636 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:39:25.553311    3636 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:39:25.553441    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:39:25.554263    3636 start.go:360] acquireMachinesLock for ha-572000-m04: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:39:25.554357    3636 start.go:364] duration metric: took 71.034µs to acquireMachinesLock for "ha-572000-m04"
	I0717 10:39:25.554380    3636 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:39:25.554388    3636 fix.go:54] fixHost starting: m04
	I0717 10:39:25.554780    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:39:25.554805    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:39:25.564043    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52015
	I0717 10:39:25.564385    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:39:25.564752    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:39:25.564769    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:39:25.564963    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:39:25.565075    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:39:25.565158    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetState
	I0717 10:39:25.565257    3636 main.go:141] libmachine: (ha-572000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:39:25.565368    3636 main.go:141] libmachine: (ha-572000-m04) DBG | hyperkit pid from json: 3096
	I0717 10:39:25.566303    3636 main.go:141] libmachine: (ha-572000-m04) DBG | hyperkit pid 3096 missing from process table
	I0717 10:39:25.566325    3636 fix.go:112] recreateIfNeeded on ha-572000-m04: state=Stopped err=<nil>
	I0717 10:39:25.566334    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	W0717 10:39:25.566413    3636 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:39:25.587318    3636 out.go:177] * Restarting existing hyperkit VM for "ha-572000-m04" ...
	I0717 10:39:25.629121    3636 main.go:141] libmachine: (ha-572000-m04) Calling .Start
	I0717 10:39:25.629280    3636 main.go:141] libmachine: (ha-572000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:39:25.629323    3636 main.go:141] libmachine: (ha-572000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/hyperkit.pid
	I0717 10:39:25.629373    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Using UUID d62b35de-5f9d-4091-a1f9-ae55052b3d93
	I0717 10:39:25.659758    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Generated MAC 1e:37:45:6a:f1:7f
	I0717 10:39:25.659780    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:39:25.659921    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"d62b35de-5f9d-4091-a1f9-ae55052b3d93", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002879b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:39:25.659979    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"d62b35de-5f9d-4091-a1f9-ae55052b3d93", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002879b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:39:25.660027    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "d62b35de-5f9d-4091-a1f9-ae55052b3d93", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/ha-572000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:39:25.660072    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U d62b35de-5f9d-4091-a1f9-ae55052b3d93 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/ha-572000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:39:25.660086    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:39:25.661465    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: Pid is 3683
	I0717 10:39:25.661986    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Attempt 0
	I0717 10:39:25.661995    3636 main.go:141] libmachine: (ha-572000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:39:25.662068    3636 main.go:141] libmachine: (ha-572000-m04) DBG | hyperkit pid from json: 3683
	I0717 10:39:25.664876    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Searching for 1e:37:45:6a:f1:7f in /var/db/dhcpd_leases ...
	I0717 10:39:25.665000    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:39:25.665028    3636 main.go:141] libmachine: (ha-572000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:d3:62:da:43:cf ID:1,6e:d3:62:da:43:cf Lease:0x6699530d}
	I0717 10:39:25.665090    3636 main.go:141] libmachine: (ha-572000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x669952ec}
	I0717 10:39:25.665098    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetConfigRaw
	I0717 10:39:25.665107    3636 main.go:141] libmachine: (ha-572000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x669952da}
	I0717 10:39:25.665121    3636 main.go:141] libmachine: (ha-572000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:37:45:6a:f1:7f ID:1,1e:37:45:6a:f1:7f Lease:0x6698001b}
	I0717 10:39:25.665133    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Found match: 1e:37:45:6a:f1:7f
	I0717 10:39:25.665155    3636 main.go:141] libmachine: (ha-572000-m04) DBG | IP: 192.169.0.8
	I0717 10:39:25.665871    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetIP
	I0717 10:39:25.666075    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:39:25.666480    3636 machine.go:94] provisionDockerMachine start ...
	I0717 10:39:25.666492    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:39:25.666622    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:39:25.666758    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:39:25.666855    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:39:25.666997    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:39:25.667100    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:39:25.667218    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:39:25.667397    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:39:25.667404    3636 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:39:25.669640    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:39:25.678044    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:39:25.679048    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:39:25.679102    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:39:25.679117    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:39:25.679129    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:39:26.061153    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:39:26.061169    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:39:26.176025    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:39:26.176085    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:39:26.176109    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:39:26.176141    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:39:26.176817    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:39:26.176827    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:39:31.459017    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:31 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:39:31.459116    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:31 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:39:31.459128    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:31 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:39:31.482911    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:31 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:40:00.729304    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:40:00.729320    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetMachineName
	I0717 10:40:00.729447    3636 buildroot.go:166] provisioning hostname "ha-572000-m04"
	I0717 10:40:00.729459    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetMachineName
	I0717 10:40:00.729548    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:00.729650    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:00.729752    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:00.729829    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:00.729922    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:00.730060    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:00.730229    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:00.730238    3636 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000-m04 && echo "ha-572000-m04" | sudo tee /etc/hostname
	I0717 10:40:00.792250    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000-m04
	
	I0717 10:40:00.792267    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:00.792395    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:00.792496    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:00.792601    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:00.792686    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:00.792813    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:00.792953    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:00.792965    3636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:40:00.851570    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:40:00.851592    3636 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:40:00.851608    3636 buildroot.go:174] setting up certificates
	I0717 10:40:00.851614    3636 provision.go:84] configureAuth start
	I0717 10:40:00.851621    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetMachineName
	I0717 10:40:00.851754    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetIP
	I0717 10:40:00.851843    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:00.851935    3636 provision.go:143] copyHostCerts
	I0717 10:40:00.851965    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:40:00.852026    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:40:00.852032    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:40:00.852183    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:40:00.852421    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:40:00.852465    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:40:00.852470    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:40:00.852549    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:40:00.852695    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:40:00.852734    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:40:00.852739    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:40:00.852814    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:40:00.852963    3636 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000-m04 san=[127.0.0.1 192.169.0.8 ha-572000-m04 localhost minikube]
	I0717 10:40:01.012731    3636 provision.go:177] copyRemoteCerts
	I0717 10:40:01.012781    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:40:01.012796    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:01.012945    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:01.013036    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.013118    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:01.013205    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/id_rsa Username:docker}
	I0717 10:40:01.045440    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:40:01.045513    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 10:40:01.065877    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:40:01.065952    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:40:01.086341    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:40:01.086417    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:40:01.107237    3636 provision.go:87] duration metric: took 255.607467ms to configureAuth
	I0717 10:40:01.107252    3636 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:40:01.107441    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:40:01.107470    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:01.107602    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:01.107691    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:01.107775    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.107862    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.107936    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:01.108052    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:01.108176    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:01.108184    3636 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:40:01.159812    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:40:01.159826    3636 buildroot.go:70] root file system type: tmpfs
	I0717 10:40:01.159906    3636 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:40:01.159918    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:01.160045    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:01.160133    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.160218    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.160312    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:01.160436    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:01.160588    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:01.160638    3636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:40:01.222986    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:40:01.223013    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:01.223158    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:01.223263    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.223339    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.223425    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:01.223557    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:01.223705    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:01.223717    3636 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:40:02.793231    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:40:02.793247    3636 machine.go:97] duration metric: took 37.125816173s to provisionDockerMachine
	I0717 10:40:02.793256    3636 start.go:293] postStartSetup for "ha-572000-m04" (driver="hyperkit")
	I0717 10:40:02.793263    3636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:40:02.793273    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:02.793461    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:40:02.793475    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:02.793570    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:02.793662    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:02.793746    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:02.793821    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/id_rsa Username:docker}
	I0717 10:40:02.826174    3636 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:40:02.829517    3636 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:40:02.829527    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:40:02.829627    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:40:02.829814    3636 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:40:02.829820    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:40:02.830025    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:40:02.837723    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:40:02.858109    3636 start.go:296] duration metric: took 64.843134ms for postStartSetup
	I0717 10:40:02.858164    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:02.858343    3636 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:40:02.858357    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:02.858452    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:02.858535    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:02.858625    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:02.858709    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/id_rsa Username:docker}
	I0717 10:40:02.891466    3636 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:40:02.891526    3636 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:40:02.924508    3636 fix.go:56] duration metric: took 37.369170253s for fixHost
	I0717 10:40:02.924533    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:02.924664    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:02.924753    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:02.924844    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:02.924927    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:02.925043    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:02.925181    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:02.925189    3636 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 10:40:02.979156    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721238002.907586801
	
	I0717 10:40:02.979168    3636 fix.go:216] guest clock: 1721238002.907586801
	I0717 10:40:02.979174    3636 fix.go:229] Guest: 2024-07-17 10:40:02.907586801 -0700 PDT Remote: 2024-07-17 10:40:02.924523 -0700 PDT m=+161.794729692 (delta=-16.936199ms)
	I0717 10:40:02.979185    3636 fix.go:200] guest clock delta is within tolerance: -16.936199ms
	I0717 10:40:02.979189    3636 start.go:83] releasing machines lock for "ha-572000-m04", held for 37.423872596s
	I0717 10:40:02.979207    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:02.979341    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetIP
	I0717 10:40:03.002677    3636 out.go:177] * Found network options:
	I0717 10:40:03.023433    3636 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0717 10:40:03.044600    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:40:03.044630    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:40:03.044645    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:40:03.044662    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:03.045380    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:03.045584    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:03.045691    3636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:40:03.045739    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	W0717 10:40:03.045803    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:40:03.045829    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:40:03.045847    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:40:03.045916    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:03.045932    3636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 10:40:03.045950    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:03.046116    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:03.046197    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:03.046277    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:03.046336    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:03.046416    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/id_rsa Username:docker}
	I0717 10:40:03.046472    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:03.046583    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/id_rsa Username:docker}
	W0717 10:40:03.078338    3636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:40:03.078404    3636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:40:03.127460    3636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:40:03.127478    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:40:03.127562    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:40:03.143174    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:40:03.152039    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:40:03.160575    3636 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:40:03.160636    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:40:03.169267    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:40:03.178061    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:40:03.186799    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:40:03.195713    3636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:40:03.205361    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:40:03.214887    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:40:03.223632    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:40:03.232306    3636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:40:03.240303    3636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:40:03.248146    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:03.349118    3636 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:40:03.368632    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:40:03.368697    3636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:40:03.382935    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:40:03.394904    3636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:40:03.408677    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:40:03.424538    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:40:03.436679    3636 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:40:03.457267    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:40:03.468621    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:40:03.484458    3636 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:40:03.487477    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:40:03.495866    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:40:03.509467    3636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:40:03.610005    3636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:40:03.711300    3636 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:40:03.711330    3636 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:40:03.725314    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:03.818685    3636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:40:06.069148    3636 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.250387117s)
	I0717 10:40:06.069225    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:40:06.080064    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:40:06.090634    3636 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:40:06.182522    3636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:40:06.285041    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:06.397211    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:40:06.410586    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:40:06.421941    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:06.525211    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 10:40:06.593566    3636 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:40:06.593658    3636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:40:06.598237    3636 start.go:563] Will wait 60s for crictl version
	I0717 10:40:06.598298    3636 ssh_runner.go:195] Run: which crictl
	I0717 10:40:06.601369    3636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:40:06.630287    3636 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 10:40:06.630357    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:40:06.648217    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:40:06.713331    3636 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:40:06.734501    3636 out.go:177]   - env NO_PROXY=192.169.0.5
	I0717 10:40:06.755443    3636 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0717 10:40:06.776545    3636 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	I0717 10:40:06.797619    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetIP
	I0717 10:40:06.797849    3636 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:40:06.801369    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:40:06.811681    3636 mustload.go:65] Loading cluster: ha-572000
	I0717 10:40:06.811867    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:40:06.812096    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:40:06.812120    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:40:06.821106    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52038
	I0717 10:40:06.821460    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:40:06.821823    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:40:06.821839    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:40:06.822045    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:40:06.822158    3636 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:40:06.822237    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:40:06.822325    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3650
	I0717 10:40:06.823304    3636 host.go:66] Checking if "ha-572000" exists ...
	I0717 10:40:06.823558    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:40:06.823583    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:40:06.832052    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52040
	I0717 10:40:06.832422    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:40:06.832722    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:40:06.832733    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:40:06.832924    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:40:06.833068    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:40:06.833173    3636 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000 for IP: 192.169.0.8
	I0717 10:40:06.833178    3636 certs.go:194] generating shared ca certs ...
	I0717 10:40:06.833187    3636 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:40:06.833369    3636 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:40:06.833445    3636 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:40:06.833455    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:40:06.833477    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:40:06.833496    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:40:06.833513    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:40:06.833602    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:40:06.833654    3636 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:40:06.833664    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:40:06.833699    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:40:06.833731    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:40:06.833765    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:40:06.833830    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:40:06.833866    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:40:06.833895    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:40:06.833914    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:40:06.833943    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:40:06.854528    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:40:06.874473    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:40:06.894419    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:40:06.914655    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:40:06.934481    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:40:06.953938    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:40:06.973423    3636 ssh_runner.go:195] Run: openssl version
	I0717 10:40:06.977846    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:40:06.987226    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:40:06.990594    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:40:06.990633    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:40:06.994910    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:40:07.004316    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:40:07.013700    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:40:07.017207    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:40:07.017252    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:40:07.021661    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
	I0717 10:40:07.030891    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:40:07.040013    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:40:07.043424    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:40:07.043460    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:40:07.048023    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 10:40:07.057292    3636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:40:07.060465    3636 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 10:40:07.060498    3636 kubeadm.go:934] updating node {m04 192.169.0.8 0 v1.30.2 docker false true} ...
	I0717 10:40:07.060568    3636 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-572000-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 10:40:07.060612    3636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:40:07.068828    3636 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:40:07.068888    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0717 10:40:07.077989    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0717 10:40:07.091753    3636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:40:07.105613    3636 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0717 10:40:07.108527    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:40:07.118827    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:07.218618    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:40:07.232580    3636 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0717 10:40:07.232780    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:40:07.270354    3636 out.go:177] * Verifying Kubernetes components...
	I0717 10:40:07.343786    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:07.486955    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:40:07.502599    3636 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:40:07.502930    3636 kapi.go:59] client config for ha-572000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa0a8b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 10:40:07.502990    3636 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0717 10:40:07.503236    3636 node_ready.go:35] waiting up to 6m0s for node "ha-572000-m04" to be "Ready" ...
	I0717 10:40:07.503290    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:40:07.503296    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.503303    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.503305    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.507147    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:07.507598    3636 node_ready.go:49] node "ha-572000-m04" has status "Ready":"True"
	I0717 10:40:07.507619    3636 node_ready.go:38] duration metric: took 4.370479ms for node "ha-572000-m04" to be "Ready" ...
	I0717 10:40:07.507631    3636 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:40:07.507695    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:40:07.507705    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.507714    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.507718    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.517761    3636 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0717 10:40:07.525740    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.525796    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:40:07.525804    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.525810    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.525815    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.527956    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:07.528370    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:07.528378    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.528384    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.528387    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.530521    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:07.530888    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:07.530899    3636 pod_ready.go:81] duration metric: took 5.142557ms for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.530907    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.530969    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9dzd5
	I0717 10:40:07.530978    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.530985    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.530990    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.533172    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:07.533578    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:07.533586    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.533592    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.533595    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.535152    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.535453    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:07.535462    3636 pod_ready.go:81] duration metric: took 4.549454ms for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.535469    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.535504    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000
	I0717 10:40:07.535509    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.535515    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.535519    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.537042    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.537410    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:07.537417    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.537423    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.537426    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.538975    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.539323    3636 pod_ready.go:92] pod "etcd-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:07.539331    3636 pod_ready.go:81] duration metric: took 3.856623ms for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.539337    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.539378    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m02
	I0717 10:40:07.539383    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.539389    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.539393    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.541081    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.541459    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:07.541467    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.541473    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.541476    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.542992    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.543383    3636 pod_ready.go:92] pod "etcd-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:07.543391    3636 pod_ready.go:81] duration metric: took 4.050033ms for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.543397    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.703505    3636 request.go:629] Waited for 160.066521ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m03
	I0717 10:40:07.703540    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m03
	I0717 10:40:07.703545    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.703551    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.703556    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.705548    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.903510    3636 request.go:629] Waited for 197.511686ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:07.903551    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:07.903556    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.903562    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.903601    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.905857    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:07.906157    3636 pod_ready.go:92] pod "etcd-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:07.906168    3636 pod_ready.go:81] duration metric: took 362.756768ms for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.906180    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:08.103966    3636 request.go:629] Waited for 197.743139ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:40:08.104021    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:40:08.104030    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:08.104037    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:08.104046    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:08.106066    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:08.303534    3636 request.go:629] Waited for 196.774341ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:08.303599    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:08.303671    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:08.303686    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:08.303697    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:08.306313    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:08.306837    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:08.306847    3636 pod_ready.go:81] duration metric: took 400.65093ms for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:08.306854    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:08.503920    3636 request.go:629] Waited for 197.018157ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:40:08.503964    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:40:08.503984    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:08.503990    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:08.503995    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:08.506056    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:08.703436    3636 request.go:629] Waited for 196.948288ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:08.703494    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:08.703500    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:08.703506    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:08.703511    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:08.705852    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:08.706163    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:08.706173    3636 pod_ready.go:81] duration metric: took 399.30321ms for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:08.706179    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:08.903771    3636 request.go:629] Waited for 197.50006ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:40:08.903806    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:40:08.903813    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:08.903820    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:08.903824    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:08.906399    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:09.104084    3636 request.go:629] Waited for 197.163497ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:09.104171    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:09.104176    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:09.104182    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:09.104187    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:09.106361    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:09.106707    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:09.106718    3636 pod_ready.go:81] duration metric: took 400.52413ms for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:09.106726    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:09.304052    3636 request.go:629] Waited for 197.283261ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:40:09.304088    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:40:09.304093    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:09.304130    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:09.304135    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:09.306083    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:09.504106    3636 request.go:629] Waited for 197.645757ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:09.504208    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:09.504220    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:09.504232    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:09.504240    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:09.511286    3636 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 10:40:09.511696    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:09.511709    3636 pod_ready.go:81] duration metric: took 404.967221ms for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:09.511716    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:09.703585    3636 request.go:629] Waited for 191.795231ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:40:09.703642    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:40:09.703653    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:09.703665    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:09.703671    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:09.706720    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:09.904070    3636 request.go:629] Waited for 196.771647ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:09.904118    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:09.904125    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:09.904134    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:09.904140    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:09.906439    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:09.906766    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:09.906776    3636 pod_ready.go:81] duration metric: took 395.046014ms for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:09.906787    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:10.104935    3636 request.go:629] Waited for 198.017235ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:40:10.105019    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:40:10.105031    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:10.105061    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:10.105068    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:10.108223    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:10.304013    3636 request.go:629] Waited for 195.251924ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:10.304073    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:10.304086    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:10.304097    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:10.304106    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:10.307327    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:10.307882    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:10.307891    3636 pod_ready.go:81] duration metric: took 401.08706ms for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:10.307899    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:10.504739    3636 request.go:629] Waited for 196.801571ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:40:10.504780    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:40:10.504821    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:10.504827    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:10.504831    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:10.506960    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:10.703733    3636 request.go:629] Waited for 196.095597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:40:10.703831    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:40:10.703840    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:10.703866    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:10.703875    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:10.706696    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:10.707101    3636 pod_ready.go:92] pod "kube-proxy-5wcph" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:10.707111    3636 pod_ready.go:81] duration metric: took 399.196595ms for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:10.707118    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:10.903773    3636 request.go:629] Waited for 196.61026ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:40:10.903910    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:40:10.903927    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:10.903945    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:10.903955    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:10.906117    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:11.104247    3636 request.go:629] Waited for 197.64653ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:11.104330    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:11.104339    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:11.104351    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:11.104362    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:11.107473    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:11.107930    3636 pod_ready.go:92] pod "kube-proxy-h7k9z" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:11.107945    3636 pod_ready.go:81] duration metric: took 400.810357ms for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:11.107954    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:11.304083    3636 request.go:629] Waited for 196.074281ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:40:11.304132    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:40:11.304139    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:11.304147    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:11.304151    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:11.306391    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:11.503460    3636 request.go:629] Waited for 196.558235ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:11.503507    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:11.503513    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:11.503519    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:11.503523    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:11.505457    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:11.505774    3636 pod_ready.go:92] pod "kube-proxy-hst7h" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:11.505785    3636 pod_ready.go:81] duration metric: took 397.815014ms for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:11.505792    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:11.704821    3636 request.go:629] Waited for 198.981688ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:40:11.704918    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:40:11.704927    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:11.704933    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:11.704936    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:11.707262    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:11.903612    3636 request.go:629] Waited for 195.874248ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:11.903682    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:11.903689    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:11.903696    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:11.903700    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:11.905982    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:11.906348    3636 pod_ready.go:92] pod "kube-proxy-v6jxh" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:11.906359    3636 pod_ready.go:81] duration metric: took 400.551047ms for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:11.906369    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:12.103492    3636 request.go:629] Waited for 197.075685ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:40:12.103568    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:40:12.103574    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:12.103580    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:12.103585    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:12.105506    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:12.303814    3636 request.go:629] Waited for 197.930746ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:12.303844    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:12.303850    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:12.303867    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:12.303874    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:12.305845    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:12.306164    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:12.306174    3636 pod_ready.go:81] duration metric: took 399.787712ms for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:12.306181    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:12.503949    3636 request.go:629] Waited for 197.718801ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:40:12.504068    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:40:12.504079    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:12.504087    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:12.504093    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:12.506372    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:12.704852    3636 request.go:629] Waited for 198.155745ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:12.704924    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:12.704932    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:12.704940    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:12.704944    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:12.707307    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:12.707616    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:12.707626    3636 pod_ready.go:81] duration metric: took 401.429815ms for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:12.707633    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:12.903728    3636 request.go:629] Waited for 196.035029ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:40:12.903828    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:40:12.903836    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:12.903842    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:12.903845    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:12.906224    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:13.103515    3636 request.go:629] Waited for 196.951957ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:13.103588    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:13.103593    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:13.103599    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:13.103603    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:13.105622    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:13.106020    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:13.106029    3636 pod_ready.go:81] duration metric: took 398.380033ms for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:13.106046    3636 pod_ready.go:38] duration metric: took 5.59825813s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:40:13.106061    3636 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 10:40:13.106113    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 10:40:13.116872    3636 system_svc.go:56] duration metric: took 10.807598ms WaitForService to wait for kubelet
	I0717 10:40:13.116887    3636 kubeadm.go:582] duration metric: took 5.884130758s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:40:13.116904    3636 node_conditions.go:102] verifying NodePressure condition ...
	I0717 10:40:13.303772    3636 request.go:629] Waited for 186.81691ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0717 10:40:13.303803    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0717 10:40:13.303807    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:13.303841    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:13.303846    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:13.306895    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:13.307714    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:40:13.307729    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:40:13.307740    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:40:13.307744    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:40:13.307748    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:40:13.307751    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:40:13.307757    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:40:13.307761    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:40:13.307764    3636 node_conditions.go:105] duration metric: took 190.851869ms to run NodePressure ...
	I0717 10:40:13.307772    3636 start.go:241] waiting for startup goroutines ...
	I0717 10:40:13.307786    3636 start.go:255] writing updated cluster config ...
	I0717 10:40:13.308139    3636 ssh_runner.go:195] Run: rm -f paused
	I0717 10:40:13.349733    3636 start.go:600] kubectl: 1.29.2, cluster: 1.30.2 (minor skew: 1)
	I0717 10:40:13.371543    3636 out.go:177] * Done! kubectl is now configured to use "ha-572000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 17 17:38:43 ha-572000 dockerd[1183]: time="2024-07-17T17:38:43.318326173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:38:43 ha-572000 dockerd[1183]: time="2024-07-17T17:38:43.318386099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:38:43 ha-572000 dockerd[1183]: time="2024-07-17T17:38:43.318398421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:43 ha-572000 dockerd[1183]: time="2024-07-17T17:38:43.318954035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:43 ha-572000 dockerd[1183]: time="2024-07-17T17:38:43.319450280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.340195606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.340255461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.340333620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.340397061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.341315078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.341404694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.341501856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.343515271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.343612113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.343637500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.343972230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.346166794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:45 ha-572000 dockerd[1183]: time="2024-07-17T17:38:45.310104278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:38:45 ha-572000 dockerd[1183]: time="2024-07-17T17:38:45.310177463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:38:45 ha-572000 dockerd[1183]: time="2024-07-17T17:38:45.310195349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:45 ha-572000 dockerd[1183]: time="2024-07-17T17:38:45.310377303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:39:13 ha-572000 dockerd[1176]: time="2024-07-17T17:39:13.526781737Z" level=info msg="ignoring event" container=a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 17:39:13 ha-572000 dockerd[1183]: time="2024-07-17T17:39:13.527422614Z" level=info msg="shim disconnected" id=a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403 namespace=moby
	Jul 17 17:39:13 ha-572000 dockerd[1183]: time="2024-07-17T17:39:13.527577585Z" level=warning msg="cleaning up after shim disconnected" id=a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403 namespace=moby
	Jul 17 17:39:13 ha-572000 dockerd[1183]: time="2024-07-17T17:39:13.527671021Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0544a7b38aa20       cbb01a7bd410d                                                                                         About a minute ago   Running             coredns                   1                   211b5a6515354       coredns-7db6d8ff4d-9dzd5
	2f15e40a181ae       53c535741fb44                                                                                         About a minute ago   Running             kube-proxy                1                   4aab8735c2c04       kube-proxy-hst7h
	a5d6b6937bc80       8c811b4aec35f                                                                                         About a minute ago   Running             busybox                   1                   24dc28c9171d4       busybox-fc5497c4f-5r4wl
	90d12ecf2a207       5cc3abe5717db                                                                                         About a minute ago   Running             kindnet-cni               1                   c4ad8ae388e4c       kindnet-t85bv
	a82cf6255e5a9       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   be6e24303245d       storage-provisioner
	22dbe2e88f6f6       cbb01a7bd410d                                                                                         About a minute ago   Running             coredns                   1                   ebfbe4a086eb8       coredns-7db6d8ff4d-2phrp
	d0c5e4f0005b0       e874818b3caac                                                                                         About a minute ago   Running             kube-controller-manager   6                   3143df977771c       kube-controller-manager-ha-572000
	2988c5a570cb1       38af8ddebf499                                                                                         2 minutes ago        Running             kube-vip                  1                   bb35c323d1311       kube-vip-ha-572000
	b589feb3cd968       7820c83aa1394                                                                                         2 minutes ago        Running             kube-scheduler            2                   1f36c956df9c2       kube-scheduler-ha-572000
	c4604d37a9454       3861cfcd7c04c                                                                                         2 minutes ago        Running             etcd                      3                   73d23719d576c       etcd-ha-572000
	490b99a8cd7e0       56ce0fd9fb532                                                                                         2 minutes ago        Running             kube-apiserver            6                   43743c72743dc       kube-apiserver-ha-572000
	caed8fc7c24d9       e874818b3caac                                                                                         2 minutes ago        Exited              kube-controller-manager   5                   3143df977771c       kube-controller-manager-ha-572000
	cd333393aa057       56ce0fd9fb532                                                                                         3 minutes ago        Exited              kube-apiserver            5                   6d7eb0e874999       kube-apiserver-ha-572000
	b6b4ce34842d6       3861cfcd7c04c                                                                                         3 minutes ago        Exited              etcd                      2                   986ceb5a6f870       etcd-ha-572000
	138bf6784d59c       38af8ddebf499                                                                                         7 minutes ago        Exited              kube-vip                  0                   df04438a4c5cc       kube-vip-ha-572000
	a53f8fcdf5d97       7820c83aa1394                                                                                         7 minutes ago        Exited              kube-scheduler            1                   bfd880612991e       kube-scheduler-ha-572000
	e1a5eb1bed550       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   10 minutes ago       Exited              busybox                   0                   29ab413131af2       busybox-fc5497c4f-5r4wl
	bb44d784bb7ab       cbb01a7bd410d                                                                                         12 minutes ago       Exited              coredns                   0                   b8c622f08395f       coredns-7db6d8ff4d-2phrp
	7b275812468c9       cbb01a7bd410d                                                                                         12 minutes ago       Exited              coredns                   0                   2588bd7c40c23       coredns-7db6d8ff4d-9dzd5
	6e40e1427ab20       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              12 minutes ago       Exited              kindnet-cni               0                   32dc836c0a2df       kindnet-t85bv
	2aeed19835352       53c535741fb44                                                                                         12 minutes ago       Exited              kube-proxy                0                   f688e08d591be       kube-proxy-hst7h
	
	
	==> coredns [0544a7b38aa2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47730 - 44649 "HINFO IN 7657991150461714427.6847867729784937660. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009507113s
	
	
	==> coredns [22dbe2e88f6f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50584 - 51756 "HINFO IN 3888167032918365436.646455749640363721. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.007934252s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1469986290]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 17:38:43.514) (total time: 30002ms):
	Trace[1469986290]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (17:39:13.516)
	Trace[1469986290]: [30.002760682s] [30.002760682s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1457962466]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 17:38:43.515) (total time: 30001ms):
	Trace[1457962466]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:39:13.516)
	Trace[1457962466]: [30.001713432s] [30.001713432s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[94258701]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 17:38:43.514) (total time: 30003ms):
	Trace[94258701]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (17:39:13.516)
	Trace[94258701]: [30.003582814s] [30.003582814s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [7b275812468c] <==
	[INFO] 10.244.0.4:49035 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001628883s
	[INFO] 10.244.0.4:59665 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081818s
	[INFO] 10.244.0.4:52274 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077378s
	[INFO] 10.244.1.2:47113 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103907s
	[INFO] 10.244.2.2:57796 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060561s
	[INFO] 10.244.2.2:40339 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000425199s
	[INFO] 10.244.2.2:38339 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010468s
	[INFO] 10.244.2.2:50750 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070547s
	[INFO] 10.244.0.4:57426 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000045039s
	[INFO] 10.244.0.4:44283 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031195s
	[INFO] 10.244.1.2:56783 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117359s
	[INFO] 10.244.1.2:34315 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061358s
	[INFO] 10.244.1.2:55792 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062223s
	[INFO] 10.244.2.2:35768 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086426s
	[INFO] 10.244.2.2:42473 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102022s
	[INFO] 10.244.0.4:53524 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000087177s
	[INFO] 10.244.0.4:35224 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079092s
	[INFO] 10.244.0.4:53020 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075585s
	[INFO] 10.244.1.2:58074 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138293s
	[INFO] 10.244.1.2:40567 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000088337s
	[INFO] 10.244.1.2:46301 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000065328s
	[INFO] 10.244.1.2:59898 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075157s
	[INFO] 10.244.2.2:48754 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000081888s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bb44d784bb7a] <==
	[INFO] 10.244.0.4:54052 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001019206s
	[INFO] 10.244.0.4:45412 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077905s
	[INFO] 10.244.0.4:37924 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159382s
	[INFO] 10.244.1.2:45862 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108196s
	[INFO] 10.244.1.2:47464 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000616154s
	[INFO] 10.244.1.2:53720 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059338s
	[INFO] 10.244.1.2:49774 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123053s
	[INFO] 10.244.1.2:54472 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000097287s
	[INFO] 10.244.1.2:52054 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064095s
	[INFO] 10.244.1.2:54428 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064142s
	[INFO] 10.244.2.2:53088 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109393s
	[INFO] 10.244.2.2:49993 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000114279s
	[INFO] 10.244.2.2:42708 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063661s
	[INFO] 10.244.2.2:57150 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000043434s
	[INFO] 10.244.0.4:45987 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112397s
	[INFO] 10.244.0.4:44116 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057797s
	[INFO] 10.244.1.2:37635 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105428s
	[INFO] 10.244.2.2:52081 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131248s
	[INFO] 10.244.2.2:54912 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087858s
	[INFO] 10.244.0.4:53981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079367s
	[INFO] 10.244.2.2:45850 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152036s
	[INFO] 10.244.2.2:36004 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000223s
	[INFO] 10.244.2.2:54459 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000128834s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-572000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-572000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-572000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T10_27_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:27:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-572000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:40:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:38:16 +0000   Wed, 17 Jul 2024 17:27:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:38:16 +0000   Wed, 17 Jul 2024 17:27:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:38:16 +0000   Wed, 17 Jul 2024 17:27:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:38:16 +0000   Wed, 17 Jul 2024 17:27:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-572000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 cc4828ff3a4b410d87d0a2c48b8c546d
	  System UUID:                5f264258-0000-0000-9840-7856c1bd4173
	  Boot ID:                    2568bff2-eded-45b6-850c-4c0e9d36f966
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5r4wl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-2phrp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7db6d8ff4d-9dzd5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-572000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-t85bv                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-572000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-572000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-hst7h                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-572000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-572000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  Starting                 97s                    kube-proxy       
	  Normal  Starting                 12m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                    kubelet          Node ha-572000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                    kubelet          Node ha-572000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                    kubelet          Node ha-572000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  NodeReady                12m                    kubelet          Node ha-572000 status is now: NodeReady
	  Normal  RegisteredNode           11m                    node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  RegisteredNode           8m15s                  node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  NodeHasSufficientMemory  2m43s (x8 over 2m43s)  kubelet          Node ha-572000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m43s (x8 over 2m43s)  kubelet          Node ha-572000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m43s (x7 over 2m43s)  kubelet          Node ha-572000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m2s                   node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  RegisteredNode           101s                   node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  RegisteredNode           95s                    node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	
	
	Name:               ha-572000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-572000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-572000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T10_28_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:28:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-572000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:40:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:38:08 +0000   Wed, 17 Jul 2024 17:28:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:38:08 +0000   Wed, 17 Jul 2024 17:28:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:38:08 +0000   Wed, 17 Jul 2024 17:28:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:38:08 +0000   Wed, 17 Jul 2024 17:28:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-572000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 21a94638d6914aaeb48a6d7a895c9b99
	  System UUID:                b5da4916-0000-0000-aec8-9a96c30c8c05
	  Boot ID:                    d3f575b3-f9f0-45ee-bee7-6209fb3d26a4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9sdw5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-572000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-g2m92                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-572000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-572000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-v6jxh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-572000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-572000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m8s                   kube-proxy       
	  Normal   Starting                 8m28s                  kube-proxy       
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-572000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-572000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node ha-572000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                    node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   Starting                 8m31s                  kubelet          Starting kubelet.
	  Warning  Rebooted                 8m31s                  kubelet          Node ha-572000-m02 has been rebooted, boot id: 7661c0d0-1379-4b0e-b101-3961fae1a207
	  Normal   NodeHasSufficientPID     8m31s                  kubelet          Node ha-572000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  8m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8m31s                  kubelet          Node ha-572000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m31s                  kubelet          Node ha-572000-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           8m15s                  node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   Starting                 2m25s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m24s (x8 over 2m25s)  kubelet          Node ha-572000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m24s (x8 over 2m25s)  kubelet          Node ha-572000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m24s (x7 over 2m25s)  kubelet          Node ha-572000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m2s                   node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   RegisteredNode           101s                   node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   RegisteredNode           95s                    node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	
	
	Name:               ha-572000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-572000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-572000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T10_29_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:29:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-572000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:40:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:38:31 +0000   Wed, 17 Jul 2024 17:29:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:38:31 +0000   Wed, 17 Jul 2024 17:29:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:38:31 +0000   Wed, 17 Jul 2024 17:29:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:38:31 +0000   Wed, 17 Jul 2024 17:30:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-572000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 be52acddd53148cc8c17d6c21c17abf3
	  System UUID:                50644be4-0000-0000-8d75-15b09204e5f5
	  Boot ID:                    f2b4f1ba-89fc-4cd2-a163-0306f2bae7bb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jhz2d                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-572000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-72zfp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-572000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-572000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-h7k9z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-572000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-572000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 108s               kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-572000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-572000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-572000-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   RegisteredNode           8m15s              node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   RegisteredNode           2m2s               node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   Starting                 111s               kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  111s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  111s               kubelet          Node ha-572000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    111s               kubelet          Node ha-572000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     111s               kubelet          Node ha-572000-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 111s               kubelet          Node ha-572000-m03 has been rebooted, boot id: f2b4f1ba-89fc-4cd2-a163-0306f2bae7bb
	  Normal   RegisteredNode           101s               node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   RegisteredNode           95s                node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	
	
	Name:               ha-572000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-572000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-572000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T10_30_40_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:30:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-572000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:40:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:40:07 +0000   Wed, 17 Jul 2024 17:40:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:40:07 +0000   Wed, 17 Jul 2024 17:40:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:40:07 +0000   Wed, 17 Jul 2024 17:40:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:40:07 +0000   Wed, 17 Jul 2024 17:40:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-572000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 a064c491460940e4967dc27f529a5ea6
	  System UUID:                d62b4091-0000-0000-a1f9-ae55052b3d93
	  Boot ID:                    9c875bb7-4ccf-49df-b662-ce64a8634436
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5xsrp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m43s
	  kube-system                 kube-proxy-5wcph    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m36s                  kube-proxy       
	  Normal   Starting                 13s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  9m43s (x2 over 9m43s)  kubelet          Node ha-572000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m43s (x2 over 9m43s)  kubelet          Node ha-572000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m43s (x2 over 9m43s)  kubelet          Node ha-572000-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m42s                  node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   NodeAllocatableEnforced  9m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m41s                  node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   RegisteredNode           9m40s                  node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   NodeReady                9m20s                  kubelet          Node ha-572000-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m15s                  node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   RegisteredNode           2m2s                   node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   RegisteredNode           101s                   node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   RegisteredNode           95s                    node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   NodeNotReady             82s                    node-controller  Node ha-572000-m04 status is now: NodeNotReady
	  Normal   Starting                 15s                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15s                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  15s (x2 over 15s)      kubelet          Node ha-572000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x2 over 15s)      kubelet          Node ha-572000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15s (x2 over 15s)      kubelet          Node ha-572000-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 15s                    kubelet          Node ha-572000-m04 has been rebooted, boot id: 9c875bb7-4ccf-49df-b662-ce64a8634436
	  Normal   NodeReady                15s                    kubelet          Node ha-572000-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.035701] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007982] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.369068] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006691] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.635959] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.223787] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.844039] systemd-fstab-generator[468]: Ignoring "noauto" option for root device
	[  +0.100018] systemd-fstab-generator[480]: Ignoring "noauto" option for root device
	[  +1.895052] systemd-fstab-generator[1104]: Ignoring "noauto" option for root device
	[  +0.053692] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.194931] systemd-fstab-generator[1141]: Ignoring "noauto" option for root device
	[  +0.116874] systemd-fstab-generator[1153]: Ignoring "noauto" option for root device
	[  +0.104796] systemd-fstab-generator[1167]: Ignoring "noauto" option for root device
	[  +2.435008] systemd-fstab-generator[1384]: Ignoring "noauto" option for root device
	[  +0.114297] systemd-fstab-generator[1396]: Ignoring "noauto" option for root device
	[  +0.106280] systemd-fstab-generator[1408]: Ignoring "noauto" option for root device
	[  +0.119247] systemd-fstab-generator[1423]: Ignoring "noauto" option for root device
	[  +0.407183] systemd-fstab-generator[1582]: Ignoring "noauto" option for root device
	[  +6.782353] kauditd_printk_skb: 234 callbacks suppressed
	[Jul17 17:38] kauditd_printk_skb: 40 callbacks suppressed
	[ +35.726193] kauditd_printk_skb: 25 callbacks suppressed
	[Jul17 17:39] kauditd_printk_skb: 50 callbacks suppressed
	
	
	==> etcd [b6b4ce34842d] <==
	{"level":"info","ts":"2024-07-17T17:37:06.183089Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-07-17T17:37:07.625159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:07.625381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:07.625508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:07.625789Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:07.626021Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:09.125694Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:09.125798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:09.125845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:09.12599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:09.12619Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:10.625744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:10.62582Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:10.625858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:10.625915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:10.625982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	{"level":"warn","ts":"2024-07-17T17:37:11.167194Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f6a6d94fe6ea5bc8","rtt":"0s"}
	{"level":"warn","ts":"2024-07-17T17:37:11.167486Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f6a6d94fe6ea5bc8","rtt":"0s"}
	{"level":"warn","ts":"2024-07-17T17:37:11.185338Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1d3f36ee75516151","rtt":"0s"}
	{"level":"warn","ts":"2024-07-17T17:37:11.185403Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1d3f36ee75516151","rtt":"0s"}
	{"level":"info","ts":"2024-07-17T17:37:12.128113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:12.128317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:12.128469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:12.128888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:12.129376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	
	
	==> etcd [c4604d37a945] <==
	{"level":"warn","ts":"2024-07-17T17:38:22.257766Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:22.317122Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:22.320897Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:22.322427Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:22.357867Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:22.457051Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:23.802501Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"1d3f36ee75516151","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:38:23.802583Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"1d3f36ee75516151","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:38:26.684167Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1d3f36ee75516151","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:38:26.684258Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1d3f36ee75516151","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:38:27.804044Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"1d3f36ee75516151","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:38:27.804236Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"1d3f36ee75516151","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-17T17:38:30.560424Z","caller":"traceutil/trace.go:171","msg":"trace[1448908815] transaction","detail":"{read_only:false; response_revision:1848; number_of_response:1; }","duration":"129.153763ms","start":"2024-07-17T17:38:30.431252Z","end":"2024-07-17T17:38:30.560406Z","steps":["trace[1448908815] 'process raft request'  (duration: 107.083433ms)","trace[1448908815] 'compare'  (duration: 21.91661ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T17:38:30.59812Z","caller":"traceutil/trace.go:171","msg":"trace[2102820773] transaction","detail":"{read_only:false; response_revision:1849; number_of_response:1; }","duration":"165.419706ms","start":"2024-07-17T17:38:30.432685Z","end":"2024-07-17T17:38:30.598105Z","steps":["trace[2102820773] 'process raft request'  (duration: 165.353536ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:38:31.684736Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1d3f36ee75516151","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:38:31.685061Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1d3f36ee75516151","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:38:31.806282Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"1d3f36ee75516151","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:38:31.80678Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"1d3f36ee75516151","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-17T17:38:32.609183Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:38:32.616715Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:38:32.617138Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:38:32.619682Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"1d3f36ee75516151","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-17T17:38:32.619894Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151"}
	{"level":"info","ts":"2024-07-17T17:38:32.624292Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"1d3f36ee75516151","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-17T17:38:32.625462Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151"}
	
	
	==> kernel <==
	 17:40:23 up 3 min,  0 users,  load average: 0.15, 0.08, 0.03
	Linux ha-572000 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6e40e1427ab2] <==
	I0717 17:31:56.892269       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:32:06.898213       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:32:06.898253       1 main.go:303] handling current node
	I0717 17:32:06.898264       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:32:06.898269       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:32:06.898416       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:32:06.898443       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:32:06.898526       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:32:06.898555       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:32:16.896377       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:32:16.896415       1 main.go:303] handling current node
	I0717 17:32:16.896426       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:32:16.896432       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:32:16.896606       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:32:16.896636       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:32:16.896674       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:32:16.896699       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:32:26.896557       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:32:26.896622       1 main.go:303] handling current node
	I0717 17:32:26.896678       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:32:26.896718       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:32:26.896938       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:32:26.897017       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:32:26.897158       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:32:26.897880       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [90d12ecf2a20] <==
	I0717 17:39:45.427615       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:39:55.431585       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:39:55.431619       1 main.go:303] handling current node
	I0717 17:39:55.431633       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:39:55.431639       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:39:55.431782       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:39:55.431791       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:39:55.431847       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:39:55.431854       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:40:05.434801       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:40:05.434852       1 main.go:303] handling current node
	I0717 17:40:05.434866       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:40:05.434873       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:40:05.435156       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:40:05.435194       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:40:05.435277       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:40:05.435363       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:40:15.426184       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:40:15.426228       1 main.go:303] handling current node
	I0717 17:40:15.426238       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:40:15.426243       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:40:15.426375       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:40:15.426402       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:40:15.426512       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:40:15.426539       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [490b99a8cd7e] <==
	I0717 17:38:06.692598       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0717 17:38:06.695172       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 17:38:06.753691       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 17:38:06.754495       1 policy_source.go:224] refreshing policies
	I0717 17:38:06.761461       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 17:38:06.775946       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 17:38:06.777937       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 17:38:06.777967       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 17:38:06.785861       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 17:38:06.785861       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 17:38:06.789965       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0717 17:38:06.785881       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 17:38:06.790098       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 17:38:06.790136       1 aggregator.go:165] initial CRD sync complete...
	I0717 17:38:06.790141       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 17:38:06.790145       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 17:38:06.790148       1 cache.go:39] Caches are synced for autoregister controller
	W0717 17:38:06.822673       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.7]
	I0717 17:38:06.824170       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 17:38:06.847080       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 17:38:06.894480       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0717 17:38:06.899931       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0717 17:38:07.685599       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0717 17:38:07.910228       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5 192.169.0.7]
	W0717 17:38:27.915985       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5 192.169.0.6]
	
	
	==> kube-apiserver [cd333393aa05] <==
	I0717 17:37:11.795742       1 options.go:221] external host was not specified, using 192.169.0.5
	I0717 17:37:11.796641       1 server.go:148] Version: v1.30.2
	I0717 17:37:11.796774       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:37:12.098000       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0717 17:37:12.100463       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 17:37:12.102906       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0717 17:37:12.102927       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0717 17:37:12.103040       1 instance.go:299] Using reconciler: lease
	W0717 17:37:13.058091       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:59336->127.0.0.1:2379: read: connection reset by peer"
	W0717 17:37:13.058287       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:59310->127.0.0.1:2379: read: connection reset by peer"
	W0717 17:37:13.058569       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:59320->127.0.0.1:2379: read: connection reset by peer"
	
	
	==> kube-controller-manager [caed8fc7c24d] <==
	I0717 17:37:47.127601       1 serving.go:380] Generated self-signed cert in-memory
	I0717 17:37:47.646900       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0717 17:37:47.646935       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:37:47.649809       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0717 17:37:47.649838       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 17:37:47.650220       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 17:37:47.649847       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0717 17:38:07.655360       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [d0c5e4f0005b] <==
	I0717 17:38:41.355830       1 shared_informer.go:320] Caches are synced for PVC protection
	I0717 17:38:41.359928       1 shared_informer.go:320] Caches are synced for GC
	I0717 17:38:41.362350       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0717 17:38:41.364853       1 shared_informer.go:320] Caches are synced for ephemeral
	I0717 17:38:41.366792       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0717 17:38:41.424626       1 shared_informer.go:320] Caches are synced for cronjob
	I0717 17:38:41.432004       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0717 17:38:41.511531       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 17:38:41.518940       1 shared_informer.go:320] Caches are synced for daemon sets
	I0717 17:38:41.541830       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 17:38:41.550619       1 shared_informer.go:320] Caches are synced for stateful set
	I0717 17:38:41.975157       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 17:38:41.982462       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 17:38:41.982520       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 17:38:43.635302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.818µs"
	I0717 17:38:44.733712       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.810534ms"
	I0717 17:38:44.734043       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.445µs"
	I0717 17:38:45.721419       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.76µs"
	I0717 17:38:45.768611       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-9v69m EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-9v69m\": the object has been modified; please apply your changes to the latest version and try again"
	I0717 17:38:45.771754       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"7c540b68-a08e-44ac-9c69-ea596263c8eb", APIVersion:"v1", ResourceVersion:"260", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-9v69m EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-9v69m": the object has been modified; please apply your changes to the latest version and try again
	I0717 17:38:45.781131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.861246ms"
	I0717 17:38:45.781831       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47.528µs"
	I0717 17:39:19.551280       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.494894ms"
	I0717 17:39:19.551568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="81.124µs"
	I0717 17:40:07.684329       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-572000-m04"
	
	
	==> kube-proxy [2aeed1983535] <==
	I0717 17:27:43.315695       1 server_linux.go:69] "Using iptables proxy"
	I0717 17:27:43.322673       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	I0717 17:27:43.354011       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 17:27:43.354032       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 17:27:43.354044       1 server_linux.go:165] "Using iptables Proxier"
	I0717 17:27:43.355997       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 17:27:43.356216       1 server.go:872] "Version info" version="v1.30.2"
	I0717 17:27:43.356225       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:27:43.356874       1 config.go:192] "Starting service config controller"
	I0717 17:27:43.356903       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 17:27:43.356921       1 config.go:319] "Starting node config controller"
	I0717 17:27:43.356943       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 17:27:43.357077       1 config.go:101] "Starting endpoint slice config controller"
	I0717 17:27:43.357144       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 17:27:43.457513       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 17:27:43.457607       1 shared_informer.go:320] Caches are synced for node config
	I0717 17:27:43.457639       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [2f15e40a181a] <==
	I0717 17:38:44.762819       1 server_linux.go:69] "Using iptables proxy"
	I0717 17:38:44.783856       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	I0717 17:38:44.830838       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 17:38:44.830870       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 17:38:44.830884       1 server_linux.go:165] "Using iptables Proxier"
	I0717 17:38:44.834309       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 17:38:44.834864       1 server.go:872] "Version info" version="v1.30.2"
	I0717 17:38:44.834894       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:38:44.836964       1 config.go:192] "Starting service config controller"
	I0717 17:38:44.837593       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 17:38:44.837672       1 config.go:101] "Starting endpoint slice config controller"
	I0717 17:38:44.837678       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 17:38:44.839841       1 config.go:319] "Starting node config controller"
	I0717 17:38:44.839870       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 17:38:44.938549       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 17:38:44.938751       1 shared_informer.go:320] Caches are synced for service config
	I0717 17:38:44.940510       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a53f8fcdf5d9] <==
	E0717 17:36:41.264926       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:36:42.998657       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:36:42.998862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:36:43.326673       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:36:43.327166       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:36:45.184656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:36:45.185412       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:36:52.182490       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:36:52.182723       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:37:00.423142       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:00.423274       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:37:01.259659       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:01.260400       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:37:02.377758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:02.378082       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:37:08.932628       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:08.932761       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:37:09.428412       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:09.428505       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:13.065507       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0717 17:37:13.067197       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0717 17:37:13.067371       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0717 17:37:13.067559       1 shared_informer.go:316] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 17:37:13.067604       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0717 17:37:13.067950       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b589feb3cd96] <==
	I0717 17:37:47.052011       1 serving.go:380] Generated self-signed cert in-memory
	W0717 17:37:57.430329       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0717 17:37:57.430356       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 17:37:57.430361       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 17:38:06.715078       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0717 17:38:06.715131       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:38:06.719828       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 17:38:06.720025       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 17:38:06.720059       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 17:38:06.720073       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 17:38:06.820740       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 17:38:39 ha-572000 kubelet[1589]: I0717 17:38:39.302284    1589 scope.go:117] "RemoveContainer" containerID="9200160f355ce6c552f980f7ed46283a5abfcee202d68ed4d026b62b5f09378f"
	Jul 17 17:38:43 ha-572000 kubelet[1589]: I0717 17:38:43.248499    1589 scope.go:117] "RemoveContainer" containerID="bb44d784bb7ab822072739958ae678f3a02d43caf6fe9538c0f06ebef18ea342"
	Jul 17 17:38:43 ha-572000 kubelet[1589]: I0717 17:38:43.249450    1589 scope.go:117] "RemoveContainer" containerID="12ba2e181ee9ae3666a5ca0e759c24d2ccb54439a79a38efff74cf14a40e784a"
	Jul 17 17:38:44 ha-572000 kubelet[1589]: I0717 17:38:44.247542    1589 scope.go:117] "RemoveContainer" containerID="6e40e1427ab20e20a4e59edefca31cfa827b45b6f6b76ae115559d4affa80801"
	Jul 17 17:38:44 ha-572000 kubelet[1589]: I0717 17:38:44.247737    1589 scope.go:117] "RemoveContainer" containerID="2aeed19835352538242328918de029a46e7a1c2c0337d634b785ef7be5db5332"
	Jul 17 17:38:44 ha-572000 kubelet[1589]: I0717 17:38:44.248176    1589 scope.go:117] "RemoveContainer" containerID="e1a5eb1bed550849fe01b413e967df27558ab752f138608980b41a250955e5cb"
	Jul 17 17:38:45 ha-572000 kubelet[1589]: I0717 17:38:45.248442    1589 scope.go:117] "RemoveContainer" containerID="7b275812468c9bd27f22db306363aca5bc7fa0141fc09681bf430d6ef78fe048"
	Jul 17 17:39:13 ha-572000 kubelet[1589]: I0717 17:39:13.938023    1589 scope.go:117] "RemoveContainer" containerID="12ba2e181ee9ae3666a5ca0e759c24d2ccb54439a79a38efff74cf14a40e784a"
	Jul 17 17:39:13 ha-572000 kubelet[1589]: I0717 17:39:13.938223    1589 scope.go:117] "RemoveContainer" containerID="a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403"
	Jul 17 17:39:13 ha-572000 kubelet[1589]: E0717 17:39:13.938325    1589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1801216d-6529-4049-b874-1577132fc03f)\"" pod="kube-system/storage-provisioner" podUID="1801216d-6529-4049-b874-1577132fc03f"
	Jul 17 17:39:28 ha-572000 kubelet[1589]: I0717 17:39:28.248196    1589 scope.go:117] "RemoveContainer" containerID="a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403"
	Jul 17 17:39:28 ha-572000 kubelet[1589]: E0717 17:39:28.248343    1589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1801216d-6529-4049-b874-1577132fc03f)\"" pod="kube-system/storage-provisioner" podUID="1801216d-6529-4049-b874-1577132fc03f"
	Jul 17 17:39:39 ha-572000 kubelet[1589]: E0717 17:39:39.270524    1589 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:39:39 ha-572000 kubelet[1589]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:39:39 ha-572000 kubelet[1589]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:39:39 ha-572000 kubelet[1589]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:39:39 ha-572000 kubelet[1589]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:39:43 ha-572000 kubelet[1589]: I0717 17:39:43.248697    1589 scope.go:117] "RemoveContainer" containerID="a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403"
	Jul 17 17:39:43 ha-572000 kubelet[1589]: E0717 17:39:43.249374    1589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1801216d-6529-4049-b874-1577132fc03f)\"" pod="kube-system/storage-provisioner" podUID="1801216d-6529-4049-b874-1577132fc03f"
	Jul 17 17:39:54 ha-572000 kubelet[1589]: I0717 17:39:54.247534    1589 scope.go:117] "RemoveContainer" containerID="a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403"
	Jul 17 17:39:54 ha-572000 kubelet[1589]: E0717 17:39:54.248369    1589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1801216d-6529-4049-b874-1577132fc03f)\"" pod="kube-system/storage-provisioner" podUID="1801216d-6529-4049-b874-1577132fc03f"
	Jul 17 17:40:07 ha-572000 kubelet[1589]: I0717 17:40:07.247771    1589 scope.go:117] "RemoveContainer" containerID="a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403"
	Jul 17 17:40:07 ha-572000 kubelet[1589]: E0717 17:40:07.248147    1589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1801216d-6529-4049-b874-1577132fc03f)\"" pod="kube-system/storage-provisioner" podUID="1801216d-6529-4049-b874-1577132fc03f"
	Jul 17 17:40:22 ha-572000 kubelet[1589]: I0717 17:40:22.247319    1589 scope.go:117] "RemoveContainer" containerID="a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403"
	Jul 17 17:40:22 ha-572000 kubelet[1589]: E0717 17:40:22.247457    1589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1801216d-6529-4049-b874-1577132fc03f)\"" pod="kube-system/storage-provisioner" podUID="1801216d-6529-4049-b874-1577132fc03f"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-572000 -n ha-572000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-572000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (6.87s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (81.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-572000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-572000 --control-plane -v=7 --alsologtostderr: (1m16.952302883s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr
ha_test.go:616: status says not all three control-plane nodes are present: args "out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr": ha-572000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-572000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-572000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-572000-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-572000-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:619: status says not all four hosts are running: args "out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr": ha-572000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-572000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-572000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-572000-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-572000-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:622: status says not all four kubelets are running: args "out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr": ha-572000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-572000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-572000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-572000-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-572000-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:625: status says not all three apiservers are running: args "out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr": ha-572000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-572000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-572000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-572000-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-572000-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-572000 -n ha-572000
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-572000 logs -n 25: (3.648414057s)
helpers_test.go:252: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m04 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m03_ha-572000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-572000 cp testdata/cp-test.txt                                                                                            | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3497274976/001/cp-test_ha-572000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000:/home/docker/cp-test_ha-572000-m04_ha-572000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000 sudo cat                                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m04_ha-572000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m02:/home/docker/cp-test_ha-572000-m04_ha-572000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m02 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m04_ha-572000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m03:/home/docker/cp-test_ha-572000-m04_ha-572000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m03 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m04_ha-572000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-572000 node stop m02 -v=7                                                                                                 | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-572000 node start m02 -v=7                                                                                                | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:32 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-572000 -v=7                                                                                                       | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-572000 -v=7                                                                                                            | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT | 17 Jul 24 10:32 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-572000 --wait=true -v=7                                                                                                | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-572000                                                                                                            | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:34 PDT |                     |
	| node    | ha-572000 node delete m03 -v=7                                                                                               | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:34 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-572000 stop -v=7                                                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:34 PDT | 17 Jul 24 10:37 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-572000 --wait=true                                                                                                     | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:37 PDT | 17 Jul 24 10:40 PDT |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	| node    | add -p ha-572000                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:40 PDT | 17 Jul 24 10:41 PDT |
	|         | --control-plane -v=7                                                                                                         |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
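	
	The table above is the profile's command history for this run. The restart cycle under test can be replayed by hand with the same binary and flags (taken directly from the rows above):
	
	    out/minikube-darwin-amd64 stop -p ha-572000 -v=7 --alsologtostderr
	    out/minikube-darwin-amd64 start -p ha-572000 --wait=true -v=7 --alsologtostderr
	    out/minikube-darwin-amd64 node list -p ha-572000 -v=7 --alsologtostderr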
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 10:37:21
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 10:37:21.160279    3636 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:37:21.160444    3636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:37:21.160449    3636 out.go:304] Setting ErrFile to fd 2...
	I0717 10:37:21.160453    3636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:37:21.160640    3636 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
	I0717 10:37:21.162037    3636 out.go:298] Setting JSON to false
	I0717 10:37:21.184380    3636 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2212,"bootTime":1721235629,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0717 10:37:21.184474    3636 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:37:21.206845    3636 out.go:177] * [ha-572000] minikube v1.33.1 on Darwin 14.5
	I0717 10:37:21.250316    3636 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 10:37:21.250374    3636 notify.go:220] Checking for updates...
	I0717 10:37:21.294243    3636 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:37:21.315083    3636 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 10:37:21.336268    3636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:37:21.357529    3636 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
	I0717 10:37:21.379368    3636 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:37:21.401138    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:21.401903    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:21.401985    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:21.411459    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51932
	I0717 10:37:21.411825    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:21.412241    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:21.412256    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:21.412501    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:21.412634    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:21.412826    3636 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:37:21.413099    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:21.413120    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:21.421537    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51934
	I0717 10:37:21.421880    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:21.422209    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:21.422224    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:21.422446    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:21.422563    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:21.451265    3636 out.go:177] * Using the hyperkit driver based on existing profile
	I0717 10:37:21.493400    3636 start.go:297] selected driver: hyperkit
	I0717 10:37:21.493425    3636 start.go:901] validating driver "hyperkit" against &{Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclas
s:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:37:21.493682    3636 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:37:21.493865    3636 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:37:21.494086    3636 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19283-1099/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0717 10:37:21.503763    3636 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0717 10:37:21.507648    3636 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:21.507668    3636 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0717 10:37:21.510386    3636 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:37:21.510420    3636 cni.go:84] Creating CNI manager for ""
	I0717 10:37:21.510429    3636 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 10:37:21.510503    3636 start.go:340] cluster config:
	{Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-t
iller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:37:21.510603    3636 iso.go:125] acquiring lock: {Name:mkf51f842bcc8a77e9c7c50d642c4c76848e96af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:37:21.554326    3636 out.go:177] * Starting "ha-572000" primary control-plane node in "ha-572000" cluster
	I0717 10:37:21.575453    3636 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:37:21.575524    3636 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0717 10:37:21.575584    3636 cache.go:56] Caching tarball of preloaded images
	I0717 10:37:21.575806    3636 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:37:21.575825    3636 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:37:21.576014    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:37:21.577007    3636 start.go:360] acquireMachinesLock for ha-572000: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:37:21.577135    3636 start.go:364] duration metric: took 100.667µs to acquireMachinesLock for "ha-572000"
	I0717 10:37:21.577166    3636 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:37:21.577183    3636 fix.go:54] fixHost starting: 
	I0717 10:37:21.577591    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:21.577617    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:21.586612    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51936
	I0717 10:37:21.586997    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:21.587342    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:21.587357    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:21.587563    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:21.587707    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:21.587805    3636 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:37:21.587906    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:21.587984    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3521
	I0717 10:37:21.588936    3636 fix.go:112] recreateIfNeeded on ha-572000: state=Stopped err=<nil>
	I0717 10:37:21.588955    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:21.588954    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid 3521 missing from process table
	W0717 10:37:21.589054    3636 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:37:21.631187    3636 out.go:177] * Restarting existing hyperkit VM for "ha-572000" ...
	I0717 10:37:21.652411    3636 main.go:141] libmachine: (ha-572000) Calling .Start
	I0717 10:37:21.652671    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:21.652780    3636 main.go:141] libmachine: (ha-572000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid
	I0717 10:37:21.654451    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid 3521 missing from process table
	I0717 10:37:21.654462    3636 main.go:141] libmachine: (ha-572000) DBG | pid 3521 is in state "Stopped"
	I0717 10:37:21.654497    3636 main.go:141] libmachine: (ha-572000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid...
	I0717 10:37:21.654867    3636 main.go:141] libmachine: (ha-572000) DBG | Using UUID 5f2666de-0b32-4258-9840-7856c1bd4173
	I0717 10:37:21.763705    3636 main.go:141] libmachine: (ha-572000) DBG | Generated MAC d2:a6:10:ad:80:98
	I0717 10:37:21.763739    3636 main.go:141] libmachine: (ha-572000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:37:21.763844    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5f2666de-0b32-4258-9840-7856c1bd4173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8a80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:37:21.763875    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5f2666de-0b32-4258-9840-7856c1bd4173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8a80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:37:21.763912    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5f2666de-0b32-4258-9840-7856c1bd4173", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/ha-572000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:37:21.763957    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5f2666de-0b32-4258-9840-7856c1bd4173 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/ha-572000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:37:21.763980    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:37:21.765595    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: Pid is 3650
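	The hyperkit invocation above mirrors the profile config loaded at the top of this start: -c 2 and -m 2200M come from CPUs:2 / Memory:2200, and reusing UUID 5f2666de-0b32-4258-9840-7856c1bd4173 keeps the generated MAC (d2:a6:10:ad:80:98) stable, which is what the driver searches for in /var/db/dhcpd_leases a few lines below to recover the VM's previous IP. The same lookup can be made by hand on the macOS host:
	
	    grep d2:a6:10:ad:80:98 /var/db/dhcpd_leases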
	I0717 10:37:21.766010    3636 main.go:141] libmachine: (ha-572000) DBG | Attempt 0
	I0717 10:37:21.766020    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:21.766092    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3650
	I0717 10:37:21.767880    3636 main.go:141] libmachine: (ha-572000) DBG | Searching for d2:a6:10:ad:80:98 in /var/db/dhcpd_leases ...
	I0717 10:37:21.767940    3636 main.go:141] libmachine: (ha-572000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:37:21.767961    3636 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x669951d0}
	I0717 10:37:21.767972    3636 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x669951be}
	I0717 10:37:21.767977    3636 main.go:141] libmachine: (ha-572000) DBG | Found match: d2:a6:10:ad:80:98
	I0717 10:37:21.767984    3636 main.go:141] libmachine: (ha-572000) DBG | IP: 192.169.0.5
	I0717 10:37:21.768041    3636 main.go:141] libmachine: (ha-572000) Calling .GetConfigRaw
	I0717 10:37:21.768653    3636 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:37:21.768835    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:37:21.769276    3636 machine.go:94] provisionDockerMachine start ...
	I0717 10:37:21.769288    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:21.769440    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:21.769559    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:21.769675    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:21.769782    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:21.769886    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:21.770036    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:21.770285    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:21.770298    3636 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:37:21.773346    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:37:21.825199    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:37:21.825892    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:37:21.825902    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:37:21.825909    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:37:21.825917    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:37:22.200252    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:37:22.200268    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:37:22.314927    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:37:22.314948    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:37:22.314982    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:37:22.314999    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:37:22.315852    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:37:22.315864    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:37:27.580528    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:27 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:37:27.580565    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:27 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:37:27.580573    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:27 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:37:27.604198    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:27 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:37:32.830003    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:37:32.830021    3636 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:37:32.830158    3636 buildroot.go:166] provisioning hostname "ha-572000"
	I0717 10:37:32.830170    3636 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:37:32.830268    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:32.830359    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:32.830451    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:32.830548    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:32.830646    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:32.830800    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:32.830958    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:32.830967    3636 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000 && echo "ha-572000" | sudo tee /etc/hostname
	I0717 10:37:32.892396    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000
	
	I0717 10:37:32.892414    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:32.892535    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:32.892617    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:32.892697    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:32.892768    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:32.892926    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:32.893069    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:32.893080    3636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:37:32.952066    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
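	The shell fragment above is the provisioner's idempotent hostname fix-up: it only touches /etc/hosts when no entry for ha-572000 is present, either rewriting an existing 127.0.1.1 line or appending one. The intended end state is a single mapping, e.g. (illustrative):
	
	    $ grep '^127.0.1.1' /etc/hosts
	    127.0.1.1 ha-572000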
	I0717 10:37:32.952086    3636 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:37:32.952098    3636 buildroot.go:174] setting up certificates
	I0717 10:37:32.952109    3636 provision.go:84] configureAuth start
	I0717 10:37:32.952116    3636 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:37:32.952255    3636 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:37:32.952365    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:32.952464    3636 provision.go:143] copyHostCerts
	I0717 10:37:32.952503    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:37:32.952585    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:37:32.952594    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:37:32.952749    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:37:32.952965    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:37:32.953012    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:37:32.953018    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:37:32.953117    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:37:32.953281    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:37:32.953328    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:37:32.953333    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:37:32.953420    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:37:32.953574    3636 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000 san=[127.0.0.1 192.169.0.5 ha-572000 localhost minikube]
	I0717 10:37:33.013099    3636 provision.go:177] copyRemoteCerts
	I0717 10:37:33.013145    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:37:33.013161    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:33.013272    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:33.013371    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.013543    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:33.013682    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:33.045521    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:37:33.045593    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:37:33.064633    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:37:33.064699    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0717 10:37:33.084163    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:37:33.084229    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:37:33.103388    3636 provision.go:87] duration metric: took 151.262739ms to configureAuth
	I0717 10:37:33.103401    3636 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:37:33.103573    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:33.103587    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:33.103711    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:33.103809    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:33.103896    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.103977    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.104077    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:33.104181    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:33.104316    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:33.104324    3636 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:37:33.156434    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:37:33.156448    3636 buildroot.go:70] root file system type: tmpfs
	I0717 10:37:33.156525    3636 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:37:33.156537    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:33.156662    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:33.156743    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.156842    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.156931    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:33.157047    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:33.157186    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:33.157233    3636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:37:33.218680    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:37:33.218702    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:33.218866    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:33.218955    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.219056    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.219143    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:33.219283    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:33.219430    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:33.219443    3636 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:37:34.829521    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:37:34.829537    3636 machine.go:97] duration metric: took 13.059920588s to provisionDockerMachine
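	The diff-or-replace step above is how the regenerated unit gets installed: because /lib/systemd/system/docker.service did not yet exist on the freshly restarted guest, the diff failed, the new file was moved into place, and enabling it produced the "Created symlink" message. The installed unit can be double-checked over the same SSH session with standard systemd queries, e.g.:
	
	    sudo systemctl cat docker.service
	    sudo systemctl is-enabled docker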
	I0717 10:37:34.829550    3636 start.go:293] postStartSetup for "ha-572000" (driver="hyperkit")
	I0717 10:37:34.829558    3636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:37:34.829569    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:34.829747    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:37:34.829763    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:34.829864    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:34.829977    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:34.830076    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:34.830154    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:34.863781    3636 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:37:34.867753    3636 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:37:34.867768    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:37:34.867875    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:37:34.868074    3636 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:37:34.868081    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:37:34.868294    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:37:34.881801    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:37:34.912172    3636 start.go:296] duration metric: took 82.609841ms for postStartSetup
	I0717 10:37:34.912193    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:34.912376    3636 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:37:34.912397    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:34.912490    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:34.912588    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:34.912689    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:34.912778    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:34.946140    3636 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:37:34.946199    3636 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:37:34.999470    3636 fix.go:56] duration metric: took 13.421948957s for fixHost
	I0717 10:37:34.999494    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:34.999648    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:34.999748    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:34.999850    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:34.999944    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:35.000069    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:35.000221    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:35.000229    3636 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 10:37:35.051085    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237854.922867132
	
	I0717 10:37:35.051099    3636 fix.go:216] guest clock: 1721237854.922867132
	I0717 10:37:35.051112    3636 fix.go:229] Guest: 2024-07-17 10:37:34.922867132 -0700 PDT Remote: 2024-07-17 10:37:34.999482 -0700 PDT m=+13.873438456 (delta=-76.614868ms)
	I0717 10:37:35.051130    3636 fix.go:200] guest clock delta is within tolerance: -76.614868ms
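	The delta reported above is simply the guest timestamp minus the host timestamp at the moment of the check:
	
	    1721237854.922867132 - 1721237854.999482 ≈ -0.0766 s  (-76.614868 ms)
	
	which is inside the skew tolerance, so the start continues without correcting the guest clock.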
	I0717 10:37:35.051134    3636 start.go:83] releasing machines lock for "ha-572000", held for 13.473647062s
	I0717 10:37:35.051154    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:35.051301    3636 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:37:35.051418    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:35.051739    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:35.051853    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:35.051967    3636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:37:35.051989    3636 ssh_runner.go:195] Run: cat /version.json
	I0717 10:37:35.051998    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:35.052000    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:35.052101    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:35.052120    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:35.052207    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:35.052223    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:35.052289    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:35.052308    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:35.052381    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:35.052403    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:35.080899    3636 ssh_runner.go:195] Run: systemctl --version
	I0717 10:37:35.132487    3636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 10:37:35.137302    3636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:37:35.137349    3636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:37:35.150408    3636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:37:35.150420    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:37:35.150523    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:37:35.166824    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:37:35.175726    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:37:35.184531    3636 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:37:35.184576    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:37:35.193352    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:37:35.202047    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:37:35.210925    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:37:35.219775    3636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:37:35.228824    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:37:35.237746    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:37:35.246520    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:37:35.255409    3636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:37:35.263547    3636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:37:35.271637    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:35.370819    3636 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:37:35.385762    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:37:35.385839    3636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:37:35.397460    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:37:35.408605    3636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:37:35.423025    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:37:35.433954    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:37:35.444983    3636 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:37:35.462789    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:37:35.474320    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:37:35.491905    3636 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:37:35.494848    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:37:35.502963    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:37:35.516602    3636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:37:35.626759    3636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:37:35.732422    3636 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:37:35.732511    3636 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:37:35.746415    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:35.837452    3636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:37:38.134243    3636 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.296714656s)
	I0717 10:37:38.134309    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:37:38.145497    3636 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0717 10:37:38.159451    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:37:38.170560    3636 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:37:38.274400    3636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:37:38.385610    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:38.490247    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:37:38.502358    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:37:38.513179    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:38.610828    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
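Above, docker.go:574 writes a small /etc/docker/daemon.json so Docker also uses the cgroupfs driver, and the cri-docker units are re-enabled. The exact JSON is not shown in this log; a plausible sketch using Docker's documented exec-opts key, written to a local file for illustration:

    package main

    import (
        "encoding/json"
        "log"
        "os"
    )

    func main() {
        // Assumption: written to ./daemon.json for illustration, not /etc/docker/daemon.json.
        cfg := map[string]any{
            // exec-opts is Docker's documented knob for selecting the cgroup driver.
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }

        data, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("daemon.json", data, 0o644); err != nil {
            log.Fatal(err)
        }
        log.Printf("wrote daemon.json:\n%s", data)
    }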
	I0717 10:37:38.675050    3636 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:37:38.675129    3636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:37:38.679555    3636 start.go:563] Will wait 60s for crictl version
	I0717 10:37:38.679605    3636 ssh_runner.go:195] Run: which crictl
	I0717 10:37:38.682545    3636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:37:38.707789    3636 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 10:37:38.707873    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:37:38.724822    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:37:38.769236    3636 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:37:38.769287    3636 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:37:38.769657    3636 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:37:38.774296    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
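The bash one-liner above updates /etc/hosts idempotently: any stale host.minikube.internal line is dropped before the current mapping is appended. The same pattern as a standalone Go sketch (it operates on a scratch ./hosts file, not the real /etc/hosts):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // upsertHost removes any line ending in "\t<name>" and appends "<ip>\t<name>".
    func upsertHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop the stale mapping
            }
            if line != "" {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        // Assumption: a scratch copy at ./hosts; values taken from the log above.
        if err := upsertHost("hosts", "192.169.0.1", "host.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }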
	I0717 10:37:38.784075    3636 kubeadm.go:883] updating cluster {Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false f
reshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 10:37:38.784175    3636 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:37:38.784231    3636 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 10:37:38.798317    3636 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0717 10:37:38.798329    3636 docker.go:615] Images already preloaded, skipping extraction
	I0717 10:37:38.798398    3636 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 10:37:38.810938    3636 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0717 10:37:38.810957    3636 cache_images.go:84] Images are preloaded, skipping loading
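The two docker images listings above are how minikube decides the preload tarball does not need extracting. A small Go sketch of that comparison, shelling out to the same command (assumes a local docker CLI and uses two of the image names from the log):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Same listing command as in the log above.
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            log.Fatal(err)
        }
        have := map[string]bool{}
        for _, img := range strings.Fields(string(out)) {
            have[img] = true
        }

        // A couple of the images the preload is expected to contain (from the log).
        want := []string{
            "registry.k8s.io/kube-apiserver:v1.30.2",
            "registry.k8s.io/pause:3.9",
        }
        for _, img := range want {
            fmt.Printf("%-45s preloaded=%v\n", img, have[img])
        }
    }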
	I0717 10:37:38.810966    3636 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.30.2 docker true true} ...
	I0717 10:37:38.811048    3636 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-572000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 10:37:38.811115    3636 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 10:37:38.829256    3636 cni.go:84] Creating CNI manager for ""
	I0717 10:37:38.829269    3636 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 10:37:38.829280    3636 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 10:37:38.829295    3636 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-572000 NodeName:ha-572000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 10:37:38.829373    3636 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-572000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
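The kubeadm config above is rendered from the options logged at kubeadm.go:181 (node IP, cluster name, CRI socket, pod/service CIDRs). A trimmed text/template sketch of that render step, using values from this log; the template itself is a simplified stand-in, not minikube's actual template:

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    clusterName: mk
    controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:{{.Port}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
        // Values taken from the log above.
        params := map[string]any{
            "NodeIP":               "192.169.0.5",
            "Port":                 8443,
            "NodeName":             "ha-572000",
            "CRISocket":            "unix:///var/run/cri-dockerd.sock",
            "ControlPlaneEndpoint": "control-plane.minikube.internal",
            "PodSubnet":            "10.244.0.0/16",
            "ServiceSubnet":        "10.96.0.0/12",
        }
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        if err := t.Execute(os.Stdout, params); err != nil {
            log.Fatal(err)
        }
    }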
	
	I0717 10:37:38.829387    3636 kube-vip.go:115] generating kube-vip config ...
	I0717 10:37:38.829437    3636 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 10:37:38.842048    3636 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 10:37:38.842112    3636 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
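kube-vip will advertise the HA virtual IP 192.169.0.254 and load-balance the API server on port 8443 (the address and lb_port values above). A trivial Go probe for checking that the VIP answers, which can help when diagnosing a restart failure like this one (assumes the machine running it can reach the cluster network):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // VIP and port taken from the kube-vip manifest above.
        addr := "192.169.0.254:8443"
        conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
        if err != nil {
            fmt.Println("VIP not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("VIP reachable:", addr)
    }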
	I0717 10:37:38.842157    3636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:37:38.849945    3636 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:37:38.849994    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 10:37:38.857243    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0717 10:37:38.870596    3636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:37:38.883936    3636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0717 10:37:38.897367    3636 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0717 10:37:38.910809    3636 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0717 10:37:38.913705    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:37:38.922873    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:39.030583    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:37:39.043433    3636 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000 for IP: 192.169.0.5
	I0717 10:37:39.043445    3636 certs.go:194] generating shared ca certs ...
	I0717 10:37:39.043456    3636 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:37:39.043642    3636 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:37:39.043720    3636 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:37:39.043730    3636 certs.go:256] generating profile certs ...
	I0717 10:37:39.043839    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key
	I0717 10:37:39.043918    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7
	I0717 10:37:39.043992    3636 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key
	I0717 10:37:39.043999    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:37:39.044021    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:37:39.044039    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:37:39.044057    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:37:39.044074    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 10:37:39.044104    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 10:37:39.044133    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 10:37:39.044152    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 10:37:39.044248    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:37:39.044296    3636 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:37:39.044310    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:37:39.044353    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:37:39.044397    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:37:39.044448    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:37:39.044541    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:37:39.044586    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:39.044607    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:37:39.044626    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:37:39.045107    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:37:39.076893    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:37:39.102499    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:37:39.129749    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:37:39.155627    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 10:37:39.180179    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 10:37:39.210181    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 10:37:39.264808    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 10:37:39.318806    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:37:39.365954    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:37:39.390620    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:37:39.410051    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 10:37:39.423408    3636 ssh_runner.go:195] Run: openssl version
	I0717 10:37:39.427605    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:37:39.436575    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:39.439804    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:39.439837    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:39.443971    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:37:39.452794    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:37:39.461667    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:37:39.464961    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:37:39.465002    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:37:39.469065    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
	I0717 10:37:39.477903    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:37:39.486816    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:37:39.490121    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:37:39.490162    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:37:39.494244    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 10:37:39.503378    3636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:37:39.506714    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 10:37:39.510953    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 10:37:39.515092    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 10:37:39.519272    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 10:37:39.523407    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 10:37:39.527554    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
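Each openssl x509 -checkend 86400 call above asks whether a certificate expires within the next 24 hours. The same check in Go with crypto/x509, pointed at any of the PEM files listed above:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        if len(os.Args) < 2 {
            log.Fatal("usage: checkend <cert.pem>")
        }
        data, err := os.ReadFile(os.Args[1])
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }

        // Equivalent of: openssl x509 -checkend 86400
        if cert.NotAfter.Before(time.Now().Add(24 * time.Hour)) {
            fmt.Printf("certificate expires within 24h (NotAfter=%s)\n", cert.NotAfter)
            os.Exit(1)
        }
        fmt.Printf("certificate is valid for at least another 24h (NotAfter=%s)\n", cert.NotAfter)
    }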
	I0717 10:37:39.531780    3636 kubeadm.go:392] StartCluster: {Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 C
lusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:37:39.531904    3636 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 10:37:39.544965    3636 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 10:37:39.553126    3636 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 10:37:39.553138    3636 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 10:37:39.553178    3636 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 10:37:39.561206    3636 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 10:37:39.561518    3636 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-572000" does not appear in /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:37:39.561607    3636 kubeconfig.go:62] /Users/jenkins/minikube-integration/19283-1099/kubeconfig needs updating (will repair): [kubeconfig missing "ha-572000" cluster setting kubeconfig missing "ha-572000" context setting]
	I0717 10:37:39.561822    3636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:37:39.562469    3636 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:37:39.562674    3636 kapi.go:59] client config for ha-572000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa0a8b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
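kubeconfig.go reports that the ha-572000 cluster and context are missing and repairs the kubeconfig before the client config above is built. A hedged sketch of such a repair with client-go's clientcmd package (the server URL and CA path mirror the log, the matching user entry is assumed to already exist, and this is not minikube's own code):

    package main

    import (
        "log"

        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
        // Assumption: a scratch kubeconfig at ./kubeconfig for illustration.
        path := "kubeconfig"
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            log.Fatal(err)
        }
        if cfg.Clusters == nil {
            cfg.Clusters = map[string]*clientcmdapi.Cluster{}
        }
        if cfg.Contexts == nil {
            cfg.Contexts = map[string]*clientcmdapi.Context{}
        }

        name := "ha-572000"
        if _, ok := cfg.Clusters[name]; !ok {
            cluster := clientcmdapi.NewCluster()
            cluster.Server = "https://192.169.0.5:8443" // endpoint from the log above
            cluster.CertificateAuthority = "/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt"
            cfg.Clusters[name] = cluster
        }
        if _, ok := cfg.Contexts[name]; !ok {
            ctx := clientcmdapi.NewContext()
            ctx.Cluster = name
            ctx.AuthInfo = name // the matching user entry is assumed to exist already
            cfg.Contexts[name] = ctx
        }
        cfg.CurrentContext = name

        if err := clientcmd.WriteToFile(*cfg, path); err != nil {
            log.Fatal(err)
        }
        log.Printf("kubeconfig %s now has cluster/context %q", path, name)
    }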
	I0717 10:37:39.562998    3636 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 10:37:39.563178    3636 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 10:37:39.570855    3636 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0717 10:37:39.570867    3636 kubeadm.go:597] duration metric: took 17.724744ms to restartPrimaryControlPlane
	I0717 10:37:39.570878    3636 kubeadm.go:394] duration metric: took 39.101036ms to StartCluster
	I0717 10:37:39.570889    3636 settings.go:142] acquiring lock: {Name:mkc45f011a907c66e2dbca7dadfff37ab48f7d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:37:39.570961    3636 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:37:39.571333    3636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:37:39.571564    3636 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:37:39.571579    3636 start.go:241] waiting for startup goroutines ...
	I0717 10:37:39.571583    3636 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 10:37:39.571709    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:39.622273    3636 out.go:177] * Enabled addons: 
	I0717 10:37:39.644517    3636 addons.go:510] duration metric: took 72.937257ms for enable addons: enabled=[]
	I0717 10:37:39.644554    3636 start.go:246] waiting for cluster config update ...
	I0717 10:37:39.644589    3636 start.go:255] writing updated cluster config ...
	I0717 10:37:39.667630    3636 out.go:177] 
	I0717 10:37:39.689827    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:39.689958    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:37:39.712261    3636 out.go:177] * Starting "ha-572000-m02" control-plane node in "ha-572000" cluster
	I0717 10:37:39.754151    3636 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:37:39.754211    3636 cache.go:56] Caching tarball of preloaded images
	I0717 10:37:39.754408    3636 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:37:39.754427    3636 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:37:39.754564    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:37:39.755532    3636 start.go:360] acquireMachinesLock for ha-572000-m02: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:37:39.755656    3636 start.go:364] duration metric: took 98.999µs to acquireMachinesLock for "ha-572000-m02"
	I0717 10:37:39.755680    3636 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:37:39.755687    3636 fix.go:54] fixHost starting: m02
	I0717 10:37:39.756121    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:39.756167    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:39.765321    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51958
	I0717 10:37:39.765669    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:39.765987    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:39.765996    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:39.766231    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:39.766367    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:39.766465    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetState
	I0717 10:37:39.766561    3636 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:39.766639    3636 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 3526
	I0717 10:37:39.767558    3636 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid 3526 missing from process table
	I0717 10:37:39.767584    3636 fix.go:112] recreateIfNeeded on ha-572000-m02: state=Stopped err=<nil>
	I0717 10:37:39.767592    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	W0717 10:37:39.767681    3636 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:37:39.811253    3636 out.go:177] * Restarting existing hyperkit VM for "ha-572000-m02" ...
	I0717 10:37:39.832179    3636 main.go:141] libmachine: (ha-572000-m02) Calling .Start
	I0717 10:37:39.832337    3636 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:39.832362    3636 main.go:141] libmachine: (ha-572000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid
	I0717 10:37:39.833334    3636 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid 3526 missing from process table
	I0717 10:37:39.833343    3636 main.go:141] libmachine: (ha-572000-m02) DBG | pid 3526 is in state "Stopped"
	I0717 10:37:39.833355    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid...
	I0717 10:37:39.833536    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Using UUID b5da5881-83da-4916-aec8-9a96c30c8c05
	I0717 10:37:39.859749    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Generated MAC 2:60:33:0:68:8b
	I0717 10:37:39.859777    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:37:39.859978    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b5da5881-83da-4916-aec8-9a96c30c8c05", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c0ae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:37:39.860020    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b5da5881-83da-4916-aec8-9a96c30c8c05", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c0ae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:37:39.860096    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b5da5881-83da-4916-aec8-9a96c30c8c05", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/ha-572000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machine
s/ha-572000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:37:39.860169    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b5da5881-83da-4916-aec8-9a96c30c8c05 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/ha-572000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd,earlyprintk=serial loglevel=3 console=t
tyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:37:39.860189    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:37:39.861788    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: Pid is 3657
	I0717 10:37:39.862251    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Attempt 0
	I0717 10:37:39.862268    3636 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:39.862355    3636 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 3657
	I0717 10:37:39.864079    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Searching for 2:60:33:0:68:8b in /var/db/dhcpd_leases ...
	I0717 10:37:39.864121    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:37:39.864142    3636 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x669952da}
	I0717 10:37:39.864158    3636 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x669951d0}
	I0717 10:37:39.864182    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Found match: 2:60:33:0:68:8b
	I0717 10:37:39.864197    3636 main.go:141] libmachine: (ha-572000-m02) DBG | IP: 192.169.0.6
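The driver resolves the m02 VM's IP by matching its generated MAC against /var/db/dhcpd_leases, as the DBG lines above show. A rough standalone lookup, assuming the usual macOS bootpd lease layout of brace-delimited records with ip_address= and hw_address= lines:

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    // ipForMAC scans a macOS bootpd lease file for an entry whose hw_address
    // contains the given MAC and returns its ip_address.
    func ipForMAC(path, mac string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip string
        matched := false
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case line == "{": // new lease record starts
                ip, matched = "", false
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac):
                matched = true
            case line == "}" && matched && ip != "":
                return ip, nil
            }
        }
        return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
        // MAC from the log above; path is the standard macOS lease database.
        ip, err := ipForMAC("/var/db/dhcpd_leases", "2:60:33:0:68:8b")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("lease IP:", ip)
    }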
	I0717 10:37:39.864229    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetConfigRaw
	I0717 10:37:39.865013    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:37:39.865242    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:37:39.865841    3636 machine.go:94] provisionDockerMachine start ...
	I0717 10:37:39.865853    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:39.866023    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:39.866160    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:39.866271    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:39.866402    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:39.866505    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:39.866622    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:39.866842    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:39.866854    3636 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:37:39.869683    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:37:39.878483    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:37:39.879603    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:37:39.879617    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:37:39.879624    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:37:39.879629    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:37:40.255889    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:37:40.255907    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:37:40.370491    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:37:40.370510    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:37:40.370520    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:37:40.370527    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:37:40.371371    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:37:40.371379    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:37:45.614184    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:45 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:37:45.614198    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:45 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:37:45.614209    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:45 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:37:45.638128    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:45 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:37:50.925250    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:37:50.925264    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:37:50.925388    3636 buildroot.go:166] provisioning hostname "ha-572000-m02"
	I0717 10:37:50.925396    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:37:50.925487    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:50.925569    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:50.925664    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:50.925753    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:50.925857    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:50.925992    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:50.926145    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:50.926154    3636 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000-m02 && echo "ha-572000-m02" | sudo tee /etc/hostname
	I0717 10:37:50.991059    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000-m02
	
	I0717 10:37:50.991079    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:50.991219    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:50.991316    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:50.991401    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:50.991492    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:50.991638    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:50.991791    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:50.991803    3636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:37:51.051090    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:37:51.051108    3636 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:37:51.051119    3636 buildroot.go:174] setting up certificates
	I0717 10:37:51.051126    3636 provision.go:84] configureAuth start
	I0717 10:37:51.051132    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:37:51.051276    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:37:51.051370    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:51.051458    3636 provision.go:143] copyHostCerts
	I0717 10:37:51.051492    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:37:51.051538    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:37:51.051544    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:37:51.051674    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:37:51.051883    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:37:51.051914    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:37:51.051919    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:37:51.052017    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:37:51.052173    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:37:51.052202    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:37:51.052207    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:37:51.052377    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:37:51.052529    3636 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000-m02 san=[127.0.0.1 192.169.0.6 ha-572000-m02 localhost minikube]
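configureAuth regenerates the Docker server certificate with the SAN list shown above (127.0.0.1, 192.169.0.6, ha-572000-m02, localhost, minikube). A quick Go check that a given server.pem actually covers those names and IPs:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        if len(os.Args) < 2 {
            log.Fatal("usage: checksan <server.pem>")
        }
        data, err := os.ReadFile(os.Args[1])
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }

        // SANs expected from the provisioning step above.
        for _, san := range []string{"127.0.0.1", "192.169.0.6", "ha-572000-m02", "localhost", "minikube"} {
            err := cert.VerifyHostname(san)
            fmt.Printf("%-15s covered=%v\n", san, err == nil)
        }
    }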
	I0717 10:37:51.118183    3636 provision.go:177] copyRemoteCerts
	I0717 10:37:51.118227    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:37:51.118240    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:51.118378    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:51.118485    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.118583    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:51.118673    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:37:51.152061    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:37:51.152130    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:37:51.171745    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:37:51.171819    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 10:37:51.192673    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:37:51.192744    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:37:51.212788    3636 provision.go:87] duration metric: took 161.649391ms to configureAuth
	I0717 10:37:51.212802    3636 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:37:51.212965    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:51.212978    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:51.213112    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:51.213224    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:51.213316    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.213411    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.213499    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:51.213614    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:51.213748    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:51.213755    3636 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:37:51.269367    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:37:51.269384    3636 buildroot.go:70] root file system type: tmpfs
	I0717 10:37:51.269468    3636 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:37:51.269484    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:51.269663    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:51.269800    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.269888    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.269973    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:51.270120    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:51.270267    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:51.270313    3636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:37:51.334311    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:37:51.334330    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:51.334460    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:51.334550    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.334644    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.334739    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:51.334864    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:51.335013    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:51.335026    3636 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:37:52.973251    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:37:52.973265    3636 machine.go:97] duration metric: took 13.107082478s to provisionDockerMachine
	I0717 10:37:52.973273    3636 start.go:293] postStartSetup for "ha-572000-m02" (driver="hyperkit")
	I0717 10:37:52.973280    3636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:37:52.973291    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:52.973486    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:37:52.973497    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:52.973604    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:52.973699    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:52.973791    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:52.973882    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:37:53.016888    3636 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:37:53.020683    3636 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:37:53.020693    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:37:53.020793    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:37:53.020968    3636 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:37:53.020974    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:37:53.021167    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:37:53.029813    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:37:53.057224    3636 start.go:296] duration metric: took 83.939886ms for postStartSetup
	I0717 10:37:53.057245    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:53.057420    3636 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:37:53.057442    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:53.057549    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:53.057634    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:53.057729    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:53.057811    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:37:53.091296    3636 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:37:53.091355    3636 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:37:53.145297    3636 fix.go:56] duration metric: took 13.389268028s for fixHost
	I0717 10:37:53.145323    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:53.145457    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:53.145570    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:53.145662    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:53.145747    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:53.145888    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:53.146033    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:53.146041    3636 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 10:37:53.200266    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237873.035451058
	
	I0717 10:37:53.200279    3636 fix.go:216] guest clock: 1721237873.035451058
	I0717 10:37:53.200284    3636 fix.go:229] Guest: 2024-07-17 10:37:53.035451058 -0700 PDT Remote: 2024-07-17 10:37:53.145313 -0700 PDT m=+32.018809214 (delta=-109.861942ms)
	I0717 10:37:53.200294    3636 fix.go:200] guest clock delta is within tolerance: -109.861942ms
	I0717 10:37:53.200298    3636 start.go:83] releasing machines lock for "ha-572000-m02", held for 13.44429115s
	I0717 10:37:53.200315    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:53.200436    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:37:53.222208    3636 out.go:177] * Found network options:
	I0717 10:37:53.243791    3636 out.go:177]   - NO_PROXY=192.169.0.5
	W0717 10:37:53.264601    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:37:53.264624    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:53.265081    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:53.265198    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:53.265269    3636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:37:53.265297    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	W0717 10:37:53.265332    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:37:53.265384    3636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 10:37:53.265387    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:53.265394    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:53.265518    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:53.265536    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:53.265639    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:53.265670    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:53.265728    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:37:53.265789    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:53.265871    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	W0717 10:37:53.294993    3636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:37:53.295059    3636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:37:53.339897    3636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:37:53.339919    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:37:53.340039    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:37:53.356231    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:37:53.365203    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:37:53.374127    3636 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:37:53.374184    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:37:53.382910    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:37:53.391778    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:37:53.400635    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:37:53.409795    3636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:37:53.418780    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:37:53.427594    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:37:53.436364    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:37:53.445437    3636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:37:53.453621    3636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:37:53.461634    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:53.558529    3636 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:37:53.577286    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:37:53.577360    3636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:37:53.591736    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:37:53.603521    3636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:37:53.618503    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:37:53.629064    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:37:53.639359    3636 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:37:53.658160    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:37:53.668814    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:37:53.683643    3636 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:37:53.686618    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:37:53.693926    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:37:53.707525    3636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:37:53.805691    3636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:37:53.920383    3636 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:37:53.920404    3636 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:37:53.934506    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:54.030259    3636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:37:56.344867    3636 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.314525686s)
	I0717 10:37:56.344926    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:37:56.355390    3636 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0717 10:37:56.369820    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:37:56.380473    3636 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:37:56.479810    3636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:37:56.576860    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:56.671071    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:37:56.685037    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:37:56.696333    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:56.796692    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 10:37:56.861896    3636 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:37:56.861969    3636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:37:56.866672    3636 start.go:563] Will wait 60s for crictl version
	I0717 10:37:56.866724    3636 ssh_runner.go:195] Run: which crictl
	I0717 10:37:56.869877    3636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:37:56.896141    3636 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 10:37:56.896217    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:37:56.915592    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:37:56.953839    3636 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:37:56.975427    3636 out.go:177]   - env NO_PROXY=192.169.0.5
	I0717 10:37:56.996201    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:37:56.996608    3636 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:37:57.001171    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:37:57.011676    3636 mustload.go:65] Loading cluster: ha-572000
	I0717 10:37:57.011852    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:57.012113    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:57.012134    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:57.020969    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51980
	I0717 10:37:57.021367    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:57.021710    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:57.021724    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:57.021923    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:57.022051    3636 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:37:57.022138    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:57.022223    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3650
	I0717 10:37:57.023174    3636 host.go:66] Checking if "ha-572000" exists ...
	I0717 10:37:57.023426    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:57.023448    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:57.032019    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51982
	I0717 10:37:57.032378    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:57.032733    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:57.032749    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:57.032974    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:57.033082    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:57.033182    3636 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000 for IP: 192.169.0.6
	I0717 10:37:57.033189    3636 certs.go:194] generating shared ca certs ...
	I0717 10:37:57.033198    3636 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:37:57.033338    3636 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:37:57.033394    3636 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:37:57.033402    3636 certs.go:256] generating profile certs ...
	I0717 10:37:57.033489    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key
	I0717 10:37:57.033573    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.060f3240
	I0717 10:37:57.033624    3636 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key
	I0717 10:37:57.033631    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:37:57.033652    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:37:57.033672    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:37:57.033691    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:37:57.033708    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 10:37:57.033726    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 10:37:57.033744    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 10:37:57.033762    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 10:37:57.033840    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:37:57.033893    3636 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:37:57.033902    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:37:57.033938    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:37:57.033978    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:37:57.034008    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:37:57.034074    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:37:57.034108    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:37:57.034128    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:37:57.034146    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:57.034178    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:57.034270    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:57.034368    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:57.034458    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:57.034541    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:57.060171    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0717 10:37:57.063698    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 10:37:57.072274    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0717 10:37:57.075754    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0717 10:37:57.084043    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 10:37:57.087057    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 10:37:57.095232    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0717 10:37:57.098576    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0717 10:37:57.107451    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0717 10:37:57.110444    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 10:37:57.118613    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0717 10:37:57.121532    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 10:37:57.130217    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:37:57.149961    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:37:57.168914    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:37:57.188002    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:37:57.207206    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 10:37:57.226812    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 10:37:57.246070    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 10:37:57.265450    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 10:37:57.284420    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:37:57.303511    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:37:57.322687    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:37:57.341613    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 10:37:57.355190    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0717 10:37:57.368847    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 10:37:57.382513    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0717 10:37:57.395989    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 10:37:57.409357    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 10:37:57.423052    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0717 10:37:57.436932    3636 ssh_runner.go:195] Run: openssl version
	I0717 10:37:57.441057    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:37:57.450112    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:37:57.453386    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:37:57.453428    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:37:57.457514    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
	I0717 10:37:57.466394    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:37:57.475362    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:37:57.478777    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:37:57.478819    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:37:57.482919    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 10:37:57.491931    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:37:57.500785    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:57.504034    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:57.504067    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:57.508244    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:37:57.517376    3636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:37:57.520713    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 10:37:57.524959    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 10:37:57.529259    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 10:37:57.533468    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 10:37:57.537834    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 10:37:57.542026    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 10:37:57.546248    3636 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.30.2 docker true true} ...
	I0717 10:37:57.546318    3636 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-572000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 10:37:57.546337    3636 kube-vip.go:115] generating kube-vip config ...
	I0717 10:37:57.546371    3636 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 10:37:57.559423    3636 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 10:37:57.559466    3636 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0717 10:37:57.559520    3636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:37:57.567774    3636 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:37:57.567817    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 10:37:57.575763    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0717 10:37:57.589137    3636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:37:57.602430    3636 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0717 10:37:57.616134    3636 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0717 10:37:57.619036    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:37:57.629004    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:57.726717    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:37:57.741206    3636 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:37:57.741389    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:57.762661    3636 out.go:177] * Verifying Kubernetes components...
	I0717 10:37:57.804314    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:57.930654    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:37:57.959022    3636 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:37:57.959251    3636 kapi.go:59] client config for ha-572000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa0a8b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 10:37:57.959292    3636 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0717 10:37:57.959472    3636 node_ready.go:35] waiting up to 6m0s for node "ha-572000-m02" to be "Ready" ...
	I0717 10:37:57.959551    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:37:57.959557    3636 round_trippers.go:469] Request Headers:
	I0717 10:37:57.959564    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:37:57.959567    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.587526    3636 round_trippers.go:574] Response Status: 200 OK in 8627 milliseconds
	I0717 10:38:06.588080    3636 node_ready.go:49] node "ha-572000-m02" has status "Ready":"True"
	I0717 10:38:06.588093    3636 node_ready.go:38] duration metric: took 8.628386286s for node "ha-572000-m02" to be "Ready" ...
	I0717 10:38:06.588101    3636 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:38:06.588149    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:06.588155    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.588161    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.588168    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.624239    3636 round_trippers.go:574] Response Status: 200 OK in 36 milliseconds
	I0717 10:38:06.633134    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.633193    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:06.633198    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.633204    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.633210    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.642331    3636 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 10:38:06.642741    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:06.642749    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.642756    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.642759    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.645958    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:06.646753    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:06.646763    3636 pod_ready.go:81] duration metric: took 13.611341ms for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.646771    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.646808    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9dzd5
	I0717 10:38:06.646813    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.646818    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.646822    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.650165    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:06.650520    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:06.650527    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.650533    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.650538    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.652506    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:06.652830    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:06.652839    3636 pod_ready.go:81] duration metric: took 6.063342ms for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.652846    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.652883    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000
	I0717 10:38:06.652888    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.652894    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.652897    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.688343    3636 round_trippers.go:574] Response Status: 200 OK in 35 milliseconds
	I0717 10:38:06.688830    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:06.688842    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.688852    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.688855    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.691433    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:06.691756    3636 pod_ready.go:92] pod "etcd-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:06.691766    3636 pod_ready.go:81] duration metric: took 38.913354ms for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.691776    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.691822    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m02
	I0717 10:38:06.691828    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.691835    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.691841    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.722915    3636 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0717 10:38:06.723291    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:06.723298    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.723304    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.723309    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.762595    3636 round_trippers.go:574] Response Status: 200 OK in 39 milliseconds
	I0717 10:38:06.763038    3636 pod_ready.go:92] pod "etcd-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:06.763050    3636 pod_ready.go:81] duration metric: took 71.265447ms for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.763057    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.763098    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m03
	I0717 10:38:06.763103    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.763109    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.763112    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.766379    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:06.788728    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:06.788744    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.788750    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.788754    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.790975    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:06.791292    3636 pod_ready.go:92] pod "etcd-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:06.791302    3636 pod_ready.go:81] duration metric: took 28.239348ms for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.791319    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.988792    3636 request.go:629] Waited for 197.413405ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:38:06.988885    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:38:06.988891    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.988897    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.988903    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.991048    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:07.189095    3636 request.go:629] Waited for 197.524443ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:07.189132    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:07.189138    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:07.189146    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:07.189196    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:07.191472    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:07.191816    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:07.191825    3636 pod_ready.go:81] duration metric: took 400.490534ms for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:07.191832    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:07.388673    3636 request.go:629] Waited for 196.768491ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:38:07.388712    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:38:07.388717    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:07.388723    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:07.388726    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:07.390742    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:07.589477    3636 request.go:629] Waited for 198.180735ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:07.589514    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:07.589519    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:07.589526    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:07.589532    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:07.593904    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:38:07.594274    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:07.594283    3636 pod_ready.go:81] duration metric: took 402.436695ms for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:07.594290    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:07.789046    3636 request.go:629] Waited for 194.715768ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:38:07.789116    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:38:07.789122    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:07.789128    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:07.789134    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:07.791498    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:07.988262    3636 request.go:629] Waited for 196.319765ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:07.988331    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:07.988337    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:07.988344    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:07.988349    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:07.990665    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:07.990933    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:07.990943    3636 pod_ready.go:81] duration metric: took 396.637435ms for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:07.990949    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:08.189888    3636 request.go:629] Waited for 198.896315ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:38:08.189961    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:38:08.189968    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:08.189977    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:08.189982    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:08.192640    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:08.388942    3636 request.go:629] Waited for 195.85351ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:08.388998    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:08.389006    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:08.389019    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:08.389035    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:08.392574    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:08.392939    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:08.392951    3636 pod_ready.go:81] duration metric: took 401.985681ms for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:08.392963    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:08.589323    3636 request.go:629] Waited for 196.303012ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:38:08.589449    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:38:08.589461    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:08.589473    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:08.589481    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:08.592867    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:08.788589    3636 request.go:629] Waited for 195.011915ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:08.788634    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:08.788643    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:08.788654    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:08.788663    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:08.791468    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:08.791995    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:08.792019    3636 pod_ready.go:81] duration metric: took 399.039947ms for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:08.792032    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:08.990174    3636 request.go:629] Waited for 198.086662ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:38:08.990290    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:38:08.990300    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:08.990310    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:08.990317    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:08.993459    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:09.189555    3636 request.go:629] Waited for 195.556708ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:09.189686    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:09.189699    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:09.189710    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:09.189717    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:09.193157    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:09.193504    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:09.193518    3636 pod_ready.go:81] duration metric: took 401.469313ms for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:09.193543    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:09.389705    3636 request.go:629] Waited for 196.104363ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:38:09.389843    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:38:09.389855    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:09.389866    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:09.389872    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:09.393695    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:09.588443    3636 request.go:629] Waited for 194.213728ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:38:09.588571    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:38:09.588582    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:09.588591    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:09.588614    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:09.591794    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:09.592120    3636 pod_ready.go:92] pod "kube-proxy-5wcph" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:09.592130    3636 pod_ready.go:81] duration metric: took 398.566071ms for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:09.592136    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:09.789810    3636 request.go:629] Waited for 197.599858ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:38:09.789932    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:38:09.789953    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:09.789967    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:09.789977    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:09.793548    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:09.990128    3636 request.go:629] Waited for 195.990226ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:09.990259    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:09.990271    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:09.990282    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:09.990289    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:09.994401    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:38:09.995074    3636 pod_ready.go:92] pod "kube-proxy-h7k9z" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:09.995084    3636 pod_ready.go:81] duration metric: took 402.932164ms for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:09.995091    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:10.188412    3636 request.go:629] Waited for 193.228723ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:38:10.188460    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:38:10.188468    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:10.188479    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:10.188487    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:10.192053    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:10.389379    3636 request.go:629] Waited for 196.635202ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:10.389554    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:10.389574    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:10.389589    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:10.389599    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:10.393541    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:10.393889    3636 pod_ready.go:92] pod "kube-proxy-hst7h" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:10.393900    3636 pod_ready.go:81] duration metric: took 398.793558ms for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:10.393912    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:10.589752    3636 request.go:629] Waited for 195.757616ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:38:10.589810    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:38:10.589821    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:10.589833    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:10.589842    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:10.593161    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:10.789574    3636 request.go:629] Waited for 195.972483ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:10.789649    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:10.789655    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:10.789661    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:10.789665    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:10.792056    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:10.792456    3636 pod_ready.go:92] pod "kube-proxy-v6jxh" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:10.792465    3636 pod_ready.go:81] duration metric: took 398.537807ms for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:10.792472    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:10.990155    3636 request.go:629] Waited for 197.636631ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:38:10.990304    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:38:10.990316    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:10.990327    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:10.990333    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:10.993508    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:11.188937    3636 request.go:629] Waited for 194.57393ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:11.188967    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:11.188973    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:11.188979    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:11.188983    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:11.190738    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:11.191134    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:11.191144    3636 pod_ready.go:81] duration metric: took 398.656979ms for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:11.191150    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:11.388866    3636 request.go:629] Waited for 197.675969ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:38:11.388927    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:38:11.388931    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:11.388937    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:11.388941    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:11.390887    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:11.589661    3636 request.go:629] Waited for 198.35169ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:11.589745    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:11.589751    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:11.589759    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:11.589764    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:11.591880    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:11.592231    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:11.592240    3636 pod_ready.go:81] duration metric: took 401.075331ms for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:11.592247    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:11.790368    3636 request.go:629] Waited for 198.069219ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:38:11.790468    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:38:11.790479    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:11.790491    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:11.790498    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:11.793691    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:11.988391    3636 request.go:629] Waited for 194.130009ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:11.988514    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:11.988524    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:11.988535    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:11.988543    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:11.991587    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:11.991946    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:11.991960    3636 pod_ready.go:81] duration metric: took 399.692083ms for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:11.991969    3636 pod_ready.go:38] duration metric: took 5.403719656s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:38:11.991988    3636 api_server.go:52] waiting for apiserver process to appear ...
	I0717 10:38:11.992040    3636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 10:38:12.003855    3636 api_server.go:72] duration metric: took 14.26226374s to wait for apiserver process to appear ...
	I0717 10:38:12.003867    3636 api_server.go:88] waiting for apiserver healthz status ...
	I0717 10:38:12.003882    3636 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0717 10:38:12.008423    3636 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0717 10:38:12.008465    3636 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0717 10:38:12.008471    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:12.008478    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:12.008481    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:12.009101    3636 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:38:12.009162    3636 api_server.go:141] control plane version: v1.30.2
	I0717 10:38:12.009171    3636 api_server.go:131] duration metric: took 5.299116ms to wait for apiserver health ...
	I0717 10:38:12.009178    3636 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 10:38:12.189013    3636 request.go:629] Waited for 179.768156ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:12.189094    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:12.189102    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:12.189111    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:12.189116    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:12.194083    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:38:12.199463    3636 system_pods.go:59] 26 kube-system pods found
	I0717 10:38:12.199478    3636 system_pods.go:61] "coredns-7db6d8ff4d-2phrp" [e76a91cf-8a14-454c-baee-7c0128645e0e] Running
	I0717 10:38:12.199495    3636 system_pods.go:61] "coredns-7db6d8ff4d-9dzd5" [f9a7cff1-56a3-4600-ae2f-a951dda10753] Running
	I0717 10:38:12.199501    3636 system_pods.go:61] "etcd-ha-572000" [2d7b717e-d404-4c63-afe9-799de6711964] Running
	I0717 10:38:12.199505    3636 system_pods.go:61] "etcd-ha-572000-m02" [8abc7662-c159-4953-9aba-11a75b4e7d65] Running
	I0717 10:38:12.199509    3636 system_pods.go:61] "etcd-ha-572000-m03" [78629908-d362-4a27-933f-2b929867c22f] Running
	I0717 10:38:12.199518    3636 system_pods.go:61] "kindnet-5xsrp" [0b84aa4d-7452-47f6-8052-f063e2e8ef7b] Running
	I0717 10:38:12.199521    3636 system_pods.go:61] "kindnet-72zfp" [0cc735c4-08b3-499a-b9e2-8c38377f371b] Running
	I0717 10:38:12.199524    3636 system_pods.go:61] "kindnet-g2m92" [be3d84cf-f9e8-426c-b02a-49eb7eed9d6a] Running
	I0717 10:38:12.199526    3636 system_pods.go:61] "kindnet-t85bv" [8649cc70-8ce8-4caf-938e-bd253fa5b7ae] Running
	I0717 10:38:12.199530    3636 system_pods.go:61] "kube-apiserver-ha-572000" [6409591d-7a50-414b-be63-44d5ddb0b0e0] Running
	I0717 10:38:12.199532    3636 system_pods.go:61] "kube-apiserver-ha-572000-m02" [d658459c-67f9-4798-87f2-c651d830af35] Running
	I0717 10:38:12.199535    3636 system_pods.go:61] "kube-apiserver-ha-572000-m03" [ac1c02f1-f023-4ad2-99bc-2657fb9d50d7] Running
	I0717 10:38:12.199538    3636 system_pods.go:61] "kube-controller-manager-ha-572000" [075c4d23-04bd-404a-822d-ae7d326a68ac] Running
	I0717 10:38:12.199541    3636 system_pods.go:61] "kube-controller-manager-ha-572000-m02" [3014bdb3-bb86-48d5-a045-2a8dab026bd8] Running
	I0717 10:38:12.199544    3636 system_pods.go:61] "kube-controller-manager-ha-572000-m03" [3b76e220-3a5b-4dac-8f69-6b380799d2ef] Running
	I0717 10:38:12.199546    3636 system_pods.go:61] "kube-proxy-5wcph" [731f5b57-131e-4e97-b47a-036b8d4edbcd] Running
	I0717 10:38:12.199553    3636 system_pods.go:61] "kube-proxy-h7k9z" [cf687084-b40d-43b6-9476-8c60c5f37d1d] Running
	I0717 10:38:12.199557    3636 system_pods.go:61] "kube-proxy-hst7h" [2b7c8d2c-3d71-4357-9429-1d8438c446f5] Running
	I0717 10:38:12.199559    3636 system_pods.go:61] "kube-proxy-v6jxh" [3f952fc8-747f-49da-b400-5212dba538d8] Running
	I0717 10:38:12.199565    3636 system_pods.go:61] "kube-scheduler-ha-572000" [da36a422-ba49-4cd6-8f47-9be7c43657be] Running
	I0717 10:38:12.199568    3636 system_pods.go:61] "kube-scheduler-ha-572000-m02" [13e289a2-8d3c-478f-9220-1d114dd5bf62] Running
	I0717 10:38:12.199571    3636 system_pods.go:61] "kube-scheduler-ha-572000-m03" [428716db-8096-4b57-9f90-e706d5683852] Running
	I0717 10:38:12.199573    3636 system_pods.go:61] "kube-vip-ha-572000" [289bb6df-c101-49d3-9e4a-6c0ecdd26551] Running
	I0717 10:38:12.199576    3636 system_pods.go:61] "kube-vip-ha-572000-m02" [fa57b8d2-bc41-42af-9d9a-1b68f882e6fe] Running
	I0717 10:38:12.199579    3636 system_pods.go:61] "kube-vip-ha-572000-m03" [a35c5007-dfe3-4b58-b16c-d568b8f9c1c2] Running
	I0717 10:38:12.199581    3636 system_pods.go:61] "storage-provisioner" [1801216d-6529-4049-b874-1577132fc03f] Running
	I0717 10:38:12.199585    3636 system_pods.go:74] duration metric: took 190.398086ms to wait for pod list to return data ...
	I0717 10:38:12.199592    3636 default_sa.go:34] waiting for default service account to be created ...
	I0717 10:38:12.388401    3636 request.go:629] Waited for 188.727547ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0717 10:38:12.388434    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0717 10:38:12.388439    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:12.388445    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:12.388449    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:12.390736    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:12.390877    3636 default_sa.go:45] found service account: "default"
	I0717 10:38:12.390886    3636 default_sa.go:55] duration metric: took 191.284842ms for default service account to be created ...
	I0717 10:38:12.390892    3636 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 10:38:12.588992    3636 request.go:629] Waited for 198.054942ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:12.589092    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:12.589101    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:12.589115    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:12.589123    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:12.595003    3636 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 10:38:12.599941    3636 system_pods.go:86] 26 kube-system pods found
	I0717 10:38:12.599953    3636 system_pods.go:89] "coredns-7db6d8ff4d-2phrp" [e76a91cf-8a14-454c-baee-7c0128645e0e] Running
	I0717 10:38:12.599962    3636 system_pods.go:89] "coredns-7db6d8ff4d-9dzd5" [f9a7cff1-56a3-4600-ae2f-a951dda10753] Running
	I0717 10:38:12.599966    3636 system_pods.go:89] "etcd-ha-572000" [2d7b717e-d404-4c63-afe9-799de6711964] Running
	I0717 10:38:12.599970    3636 system_pods.go:89] "etcd-ha-572000-m02" [8abc7662-c159-4953-9aba-11a75b4e7d65] Running
	I0717 10:38:12.599986    3636 system_pods.go:89] "etcd-ha-572000-m03" [78629908-d362-4a27-933f-2b929867c22f] Running
	I0717 10:38:12.599992    3636 system_pods.go:89] "kindnet-5xsrp" [0b84aa4d-7452-47f6-8052-f063e2e8ef7b] Running
	I0717 10:38:12.599996    3636 system_pods.go:89] "kindnet-72zfp" [0cc735c4-08b3-499a-b9e2-8c38377f371b] Running
	I0717 10:38:12.599999    3636 system_pods.go:89] "kindnet-g2m92" [be3d84cf-f9e8-426c-b02a-49eb7eed9d6a] Running
	I0717 10:38:12.600003    3636 system_pods.go:89] "kindnet-t85bv" [8649cc70-8ce8-4caf-938e-bd253fa5b7ae] Running
	I0717 10:38:12.600007    3636 system_pods.go:89] "kube-apiserver-ha-572000" [6409591d-7a50-414b-be63-44d5ddb0b0e0] Running
	I0717 10:38:12.600010    3636 system_pods.go:89] "kube-apiserver-ha-572000-m02" [d658459c-67f9-4798-87f2-c651d830af35] Running
	I0717 10:38:12.600014    3636 system_pods.go:89] "kube-apiserver-ha-572000-m03" [ac1c02f1-f023-4ad2-99bc-2657fb9d50d7] Running
	I0717 10:38:12.600018    3636 system_pods.go:89] "kube-controller-manager-ha-572000" [075c4d23-04bd-404a-822d-ae7d326a68ac] Running
	I0717 10:38:12.600021    3636 system_pods.go:89] "kube-controller-manager-ha-572000-m02" [3014bdb3-bb86-48d5-a045-2a8dab026bd8] Running
	I0717 10:38:12.600024    3636 system_pods.go:89] "kube-controller-manager-ha-572000-m03" [3b76e220-3a5b-4dac-8f69-6b380799d2ef] Running
	I0717 10:38:12.600028    3636 system_pods.go:89] "kube-proxy-5wcph" [731f5b57-131e-4e97-b47a-036b8d4edbcd] Running
	I0717 10:38:12.600031    3636 system_pods.go:89] "kube-proxy-h7k9z" [cf687084-b40d-43b6-9476-8c60c5f37d1d] Running
	I0717 10:38:12.600035    3636 system_pods.go:89] "kube-proxy-hst7h" [2b7c8d2c-3d71-4357-9429-1d8438c446f5] Running
	I0717 10:38:12.600038    3636 system_pods.go:89] "kube-proxy-v6jxh" [3f952fc8-747f-49da-b400-5212dba538d8] Running
	I0717 10:38:12.600041    3636 system_pods.go:89] "kube-scheduler-ha-572000" [da36a422-ba49-4cd6-8f47-9be7c43657be] Running
	I0717 10:38:12.600044    3636 system_pods.go:89] "kube-scheduler-ha-572000-m02" [13e289a2-8d3c-478f-9220-1d114dd5bf62] Running
	I0717 10:38:12.600048    3636 system_pods.go:89] "kube-scheduler-ha-572000-m03" [428716db-8096-4b57-9f90-e706d5683852] Running
	I0717 10:38:12.600051    3636 system_pods.go:89] "kube-vip-ha-572000" [289bb6df-c101-49d3-9e4a-6c0ecdd26551] Running
	I0717 10:38:12.600054    3636 system_pods.go:89] "kube-vip-ha-572000-m02" [fa57b8d2-bc41-42af-9d9a-1b68f882e6fe] Running
	I0717 10:38:12.600058    3636 system_pods.go:89] "kube-vip-ha-572000-m03" [a35c5007-dfe3-4b58-b16c-d568b8f9c1c2] Running
	I0717 10:38:12.600061    3636 system_pods.go:89] "storage-provisioner" [1801216d-6529-4049-b874-1577132fc03f] Running
	I0717 10:38:12.600065    3636 system_pods.go:126] duration metric: took 209.164597ms to wait for k8s-apps to be running ...
	I0717 10:38:12.600076    3636 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 10:38:12.600137    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 10:38:12.610524    3636 system_svc.go:56] duration metric: took 10.448568ms WaitForService to wait for kubelet
	I0717 10:38:12.610538    3636 kubeadm.go:582] duration metric: took 14.868933199s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:38:12.610564    3636 node_conditions.go:102] verifying NodePressure condition ...
	I0717 10:38:12.789306    3636 request.go:629] Waited for 178.678322ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0717 10:38:12.789427    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0717 10:38:12.789438    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:12.789448    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:12.789457    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:12.793007    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:12.794084    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:38:12.794097    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:38:12.794107    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:38:12.794110    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:38:12.794114    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:38:12.794122    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:38:12.794126    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:38:12.794129    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:38:12.794133    3636 node_conditions.go:105] duration metric: took 183.560156ms to run NodePressure ...
	I0717 10:38:12.794140    3636 start.go:241] waiting for startup goroutines ...
	I0717 10:38:12.794158    3636 start.go:255] writing updated cluster config ...
	I0717 10:38:12.815984    3636 out.go:177] 
	I0717 10:38:12.836616    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:38:12.836683    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:38:12.857448    3636 out.go:177] * Starting "ha-572000-m03" control-plane node in "ha-572000" cluster
	I0717 10:38:12.899463    3636 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:38:12.899506    3636 cache.go:56] Caching tarball of preloaded images
	I0717 10:38:12.899666    3636 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:38:12.899684    3636 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:38:12.899813    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:38:12.900669    3636 start.go:360] acquireMachinesLock for ha-572000-m03: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:38:12.900765    3636 start.go:364] duration metric: took 73.243µs to acquireMachinesLock for "ha-572000-m03"
	I0717 10:38:12.900790    3636 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:38:12.900816    3636 fix.go:54] fixHost starting: m03
	I0717 10:38:12.901158    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:38:12.901182    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:38:12.910100    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51987
	I0717 10:38:12.910428    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:38:12.910808    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:38:12.910824    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:38:12.911027    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:38:12.911151    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:12.911236    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetState
	I0717 10:38:12.911315    3636 main.go:141] libmachine: (ha-572000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:38:12.911405    3636 main.go:141] libmachine: (ha-572000-m03) DBG | hyperkit pid from json: 2972
	I0717 10:38:12.912336    3636 main.go:141] libmachine: (ha-572000-m03) DBG | hyperkit pid 2972 missing from process table
	I0717 10:38:12.912361    3636 fix.go:112] recreateIfNeeded on ha-572000-m03: state=Stopped err=<nil>
	I0717 10:38:12.912369    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	W0717 10:38:12.912452    3636 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:38:12.933536    3636 out.go:177] * Restarting existing hyperkit VM for "ha-572000-m03" ...
	I0717 10:38:12.975448    3636 main.go:141] libmachine: (ha-572000-m03) Calling .Start
	I0717 10:38:12.975666    3636 main.go:141] libmachine: (ha-572000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:38:12.975716    3636 main.go:141] libmachine: (ha-572000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/hyperkit.pid
	I0717 10:38:12.977484    3636 main.go:141] libmachine: (ha-572000-m03) DBG | hyperkit pid 2972 missing from process table
	I0717 10:38:12.977496    3636 main.go:141] libmachine: (ha-572000-m03) DBG | pid 2972 is in state "Stopped"
	I0717 10:38:12.977512    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/hyperkit.pid...
	I0717 10:38:12.977862    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Using UUID 5064fb5d-6e32-4be4-8d75-15b09204e5f5
	I0717 10:38:13.005572    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Generated MAC 6e:d3:62:da:43:cf
	I0717 10:38:13.005591    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:38:13.005736    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5064fb5d-6e32-4be4-8d75-15b09204e5f5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acc00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:38:13.005764    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5064fb5d-6e32-4be4-8d75-15b09204e5f5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acc00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:38:13.005828    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5064fb5d-6e32-4be4-8d75-15b09204e5f5", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/ha-572000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:38:13.005888    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5064fb5d-6e32-4be4-8d75-15b09204e5f5 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/ha-572000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:38:13.005909    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:38:13.007252    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: Pid is 3665
	I0717 10:38:13.007703    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Attempt 0
	I0717 10:38:13.007718    3636 main.go:141] libmachine: (ha-572000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:38:13.007809    3636 main.go:141] libmachine: (ha-572000-m03) DBG | hyperkit pid from json: 3665
	I0717 10:38:13.009827    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Searching for 6e:d3:62:da:43:cf in /var/db/dhcpd_leases ...
	I0717 10:38:13.009874    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:38:13.009921    3636 main.go:141] libmachine: (ha-572000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x669952ec}
	I0717 10:38:13.009945    3636 main.go:141] libmachine: (ha-572000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x669952da}
	I0717 10:38:13.009959    3636 main.go:141] libmachine: (ha-572000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:37:45:6a:f1:7f ID:1,1e:37:45:6a:f1:7f Lease:0x6698001b}
	I0717 10:38:13.009965    3636 main.go:141] libmachine: (ha-572000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:d3:62:da:43:cf ID:1,6e:d3:62:da:43:cf Lease:0x669950e4}
	I0717 10:38:13.009979    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetConfigRaw
	I0717 10:38:13.009982    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Found match: 6e:d3:62:da:43:cf
	I0717 10:38:13.009992    3636 main.go:141] libmachine: (ha-572000-m03) DBG | IP: 192.169.0.7
	I0717 10:38:13.010657    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetIP
	I0717 10:38:13.010834    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:38:13.011336    3636 machine.go:94] provisionDockerMachine start ...
	I0717 10:38:13.011346    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:13.011471    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:13.011562    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:13.011675    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:13.011768    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:13.011883    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:13.012034    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:13.012203    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:13.012211    3636 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:38:13.014976    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:38:13.023104    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:38:13.024110    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:38:13.024135    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:38:13.024157    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:38:13.024175    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:38:13.404157    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:38:13.404173    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:38:13.519656    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:38:13.519690    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:38:13.519727    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:38:13.519751    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:38:13.520524    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:38:13.520534    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:38:18.810258    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:18 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0717 10:38:18.810297    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:18 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0717 10:38:18.810307    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:18 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0717 10:38:18.834790    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:18 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0717 10:38:24.076646    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:38:24.076665    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetMachineName
	I0717 10:38:24.076790    3636 buildroot.go:166] provisioning hostname "ha-572000-m03"
	I0717 10:38:24.076802    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetMachineName
	I0717 10:38:24.076886    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.077024    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.077111    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.077196    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.077278    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.077404    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:24.077556    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:24.077565    3636 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000-m03 && echo "ha-572000-m03" | sudo tee /etc/hostname
	I0717 10:38:24.142857    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000-m03
	
	I0717 10:38:24.142872    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.143001    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.143104    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.143196    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.143280    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.143395    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:24.143539    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:24.143551    3636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:38:24.203331    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:38:24.203349    3636 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:38:24.203359    3636 buildroot.go:174] setting up certificates
	I0717 10:38:24.203364    3636 provision.go:84] configureAuth start
	I0717 10:38:24.203370    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetMachineName
	I0717 10:38:24.203518    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetIP
	I0717 10:38:24.203623    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.203721    3636 provision.go:143] copyHostCerts
	I0717 10:38:24.203751    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:38:24.203800    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:38:24.203806    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:38:24.203931    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:38:24.204144    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:38:24.204174    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:38:24.204179    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:38:24.204294    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:38:24.204463    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:38:24.204496    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:38:24.204500    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:38:24.204570    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:38:24.204726    3636 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000-m03 san=[127.0.0.1 192.169.0.7 ha-572000-m03 localhost minikube]
	I0717 10:38:24.389534    3636 provision.go:177] copyRemoteCerts
	I0717 10:38:24.389582    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:38:24.389597    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.389749    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.389840    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.389936    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.390018    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa Username:docker}
	I0717 10:38:24.424587    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:38:24.424660    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:38:24.444455    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:38:24.444522    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 10:38:24.465006    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:38:24.465071    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:38:24.485065    3636 provision.go:87] duration metric: took 281.685984ms to configureAuth
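[editor's note] The configureAuth step above copies the host CA/cert/key material and then generates a per-node server certificate with SANs [127.0.0.1 192.169.0.7 ha-572000-m03 localhost minikube] before scp'ing ca.pem, server.pem and server-key.pem into /etc/docker. A minimal sketch (not part of the minikube source) that re-checks the provisioned server.pem carries those SANs; the file path and expected SAN list come from this log, everything else is illustrative:

// sancheck.go - illustrative post-provision check, assuming /etc/docker/server.pem
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"net"
	"os"
)

func main() {
	raw, err := os.ReadFile("/etc/docker/server.pem") // path as scp'd in the log above
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// SANs requested by configureAuth in this run
	wantDNS := []string{"ha-572000-m03", "localhost", "minikube"}
	wantIP := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")}
	fmt.Println("DNS SANs:", cert.DNSNames, "want:", wantDNS)
	fmt.Println("IP SANs: ", cert.IPAddresses, "want:", wantIP)
}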
	I0717 10:38:24.485079    3636 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:38:24.485254    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:38:24.485268    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:24.485399    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.485509    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.485606    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.485695    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.485780    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.485889    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:24.486018    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:24.486026    3636 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:38:24.539772    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:38:24.539786    3636 buildroot.go:70] root file system type: tmpfs
	I0717 10:38:24.539874    3636 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:38:24.539885    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.540019    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.540102    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.540205    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.540313    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.540462    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:24.540607    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:24.540655    3636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:38:24.605074    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:38:24.605091    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.605230    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.605339    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.605424    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.605494    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.605620    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:24.605771    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:24.605784    3636 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:38:26.231394    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:38:26.231416    3636 machine.go:97] duration metric: took 13.21973714s to provisionDockerMachine
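[editor's note] The unit install above is idempotent: the rendered docker.service.new is diffed against the unit already on disk and only swapped in (followed by daemon-reload, enable, restart) when they differ; on this freshly restarted guest the diff fails with "No such file or directory", so the new unit is always installed. A small sketch (a hypothetical helper, not the minikube function) that rebuilds the one-liner the log runs:

// installUnitCmd reproduces the idempotent unit-swap command shown above.
package main

import "fmt"

func installUnitCmd(unitPath string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || "+
			"{ sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && "+
			"sudo systemctl -f enable docker && "+
			"sudo systemctl -f restart docker; }",
		unitPath)
}

func main() {
	fmt.Println(installUnitCmd("/lib/systemd/system/docker.service"))
}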
	I0717 10:38:26.231428    3636 start.go:293] postStartSetup for "ha-572000-m03" (driver="hyperkit")
	I0717 10:38:26.231437    3636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:38:26.231448    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.231633    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:38:26.231652    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:26.231764    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:26.231872    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.231959    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:26.232054    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa Username:docker}
	I0717 10:38:26.266647    3636 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:38:26.269791    3636 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:38:26.269801    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:38:26.269897    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:38:26.270060    3636 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:38:26.270067    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:38:26.270227    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:38:26.278127    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:38:26.297704    3636 start.go:296] duration metric: took 66.264765ms for postStartSetup
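[editor's note] postStartSetup scans .minikube/addons and .minikube/files on the host and mirrors anything found onto the guest, which is how files/etc/ssl/certs/16392.pem ends up at /etc/ssl/certs/16392.pem above. A dependency-free sketch of that host-to-guest path mapping (the walk itself is illustrative; the root path is the one used in this run):

// filesync sketch: print the guest path each local asset would map to.
package main

import (
	"fmt"
	"io/fs"
	"log"
	"path/filepath"
	"strings"
)

func main() {
	root := "/Users/jenkins/minikube-integration/19283-1099/.minikube/files"
	err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		guest := "/" + strings.TrimPrefix(p, root+string(filepath.Separator))
		fmt.Printf("%s -> %s\n", p, filepath.ToSlash(guest))
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}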
	I0717 10:38:26.297725    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.297894    3636 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:38:26.297906    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:26.297982    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:26.298095    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.298185    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:26.298259    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa Username:docker}
	I0717 10:38:26.332566    3636 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:38:26.332629    3636 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:38:26.364567    3636 fix.go:56] duration metric: took 13.463410955s for fixHost
	I0717 10:38:26.364593    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:26.364774    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:26.364878    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.364991    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.365075    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:26.365213    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:26.365360    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:26.365368    3636 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 10:38:26.420992    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237906.507932482
	
	I0717 10:38:26.421006    3636 fix.go:216] guest clock: 1721237906.507932482
	I0717 10:38:26.421017    3636 fix.go:229] Guest: 2024-07-17 10:38:26.507932482 -0700 PDT Remote: 2024-07-17 10:38:26.364583 -0700 PDT m=+65.237237021 (delta=143.349482ms)
	I0717 10:38:26.421032    3636 fix.go:200] guest clock delta is within tolerance: 143.349482ms
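[editor's note] fix.go compares the guest's `date +%s.%N` output against the host clock and only resynchronizes when the delta exceeds a tolerance; here the delta was ~143ms and was accepted. A minimal sketch of that comparison, using the timestamp echoed by the guest above (the 2s tolerance is an assumption for illustration):

// clock-delta sketch: parse a guest `date +%s.%N` value and compare to local time.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	guestOut := "1721237906.507932482" // value echoed by the guest in this log
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	tolerance := 2 * time.Second // assumed threshold, for illustration only
	fmt.Printf("delta=%v within tolerance=%v: %v\n",
		delta, tolerance, math.Abs(float64(delta)) < float64(tolerance))
}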
	I0717 10:38:26.421036    3636 start.go:83] releasing machines lock for "ha-572000-m03", held for 13.519917261s
	I0717 10:38:26.421054    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.421181    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetIP
	I0717 10:38:26.443010    3636 out.go:177] * Found network options:
	I0717 10:38:26.464409    3636 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0717 10:38:26.487460    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:38:26.487486    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:38:26.487503    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.488209    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.488434    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.488546    3636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:38:26.488583    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	W0717 10:38:26.488701    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:38:26.488736    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:38:26.488809    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:26.488843    3636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 10:38:26.488855    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:26.489040    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.489074    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:26.489211    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:26.489222    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.489320    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa Username:docker}
	I0717 10:38:26.489386    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:26.489533    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa Username:docker}
	W0717 10:38:26.520778    3636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:38:26.520842    3636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:38:26.572109    3636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:38:26.572138    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:38:26.572238    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:38:26.587958    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:38:26.596058    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:38:26.604066    3636 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:38:26.604116    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:38:26.612485    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:38:26.620942    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:38:26.629083    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:38:26.637275    3636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:38:26.645515    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:38:26.653717    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:38:26.662055    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:38:26.670484    3636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:38:26.677700    3636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:38:26.684962    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:26.781787    3636 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:38:26.802958    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:38:26.803029    3636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:38:26.827692    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:38:26.840860    3636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:38:26.869195    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:38:26.881705    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:38:26.892987    3636 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:38:26.911733    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:38:26.922817    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:38:26.938911    3636 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:38:26.941995    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:38:26.951587    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:38:26.965318    3636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:38:27.062809    3636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:38:27.181748    3636 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:38:27.181774    3636 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:38:27.195694    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:27.293396    3636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:38:29.632743    3636 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.339268733s)
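[editor's note] Before this docker restart the log shows a 130-byte /etc/docker/daemon.json being written to pin the "cgroupfs" cgroup driver. The exact file contents are not printed; the fields below are an assumption of a typical cgroupfs pin, not the file minikube actually wrote:

// daemon.json sketch (assumed contents, not copied from the log).
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]any{
		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
		"log-driver": "json-file",
	}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // would be written to /etc/docker/daemon.json
}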
	I0717 10:38:29.632812    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:38:29.643610    3636 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0717 10:38:29.657480    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:38:29.668578    3636 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:38:29.772887    3636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:38:29.887343    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:29.983127    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:38:29.998340    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:38:30.010843    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:30.124553    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 10:38:30.193605    3636 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:38:30.193684    3636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:38:30.198773    3636 start.go:563] Will wait 60s for crictl version
	I0717 10:38:30.198857    3636 ssh_runner.go:195] Run: which crictl
	I0717 10:38:30.202846    3636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:38:30.233816    3636 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
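[editor's note] start.go waits up to 60s for /var/run/cri-dockerd.sock to appear before probing crictl, as logged above. A sketch of that socket-path wait loop (the helper and its 500ms poll interval are assumptions; only the path and the 60s budget come from the log):

// socket-wait sketch mirroring "Will wait 60s for socket path /var/run/cri-dockerd.sock".
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket is present")
}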
	I0717 10:38:30.233915    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:38:30.253337    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:38:30.311688    3636 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:38:30.384020    3636 out.go:177]   - env NO_PROXY=192.169.0.5
	I0717 10:38:30.444054    3636 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0717 10:38:30.480967    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetIP
	I0717 10:38:30.481248    3636 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:38:30.485047    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:38:30.495793    3636 mustload.go:65] Loading cluster: ha-572000
	I0717 10:38:30.495976    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:38:30.496198    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:38:30.496221    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:38:30.505198    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52009
	I0717 10:38:30.505558    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:38:30.505932    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:38:30.505942    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:38:30.506222    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:38:30.506342    3636 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:38:30.506437    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:38:30.506526    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3650
	I0717 10:38:30.507493    3636 host.go:66] Checking if "ha-572000" exists ...
	I0717 10:38:30.507764    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:38:30.507798    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:38:30.516606    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52011
	I0717 10:38:30.516943    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:38:30.517270    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:38:30.517281    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:38:30.517513    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:38:30.517630    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:38:30.517732    3636 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000 for IP: 192.169.0.7
	I0717 10:38:30.517737    3636 certs.go:194] generating shared ca certs ...
	I0717 10:38:30.517751    3636 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:38:30.517912    3636 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:38:30.517964    3636 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:38:30.517973    3636 certs.go:256] generating profile certs ...
	I0717 10:38:30.518074    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key
	I0717 10:38:30.518169    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.562e5459
	I0717 10:38:30.518222    3636 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key
	I0717 10:38:30.518229    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:38:30.518253    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:38:30.518273    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:38:30.518296    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:38:30.518321    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 10:38:30.518340    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 10:38:30.518358    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 10:38:30.518375    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 10:38:30.518476    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:38:30.518520    3636 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:38:30.518529    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:38:30.518566    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:38:30.518602    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:38:30.518634    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:38:30.518702    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:38:30.518736    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:38:30.518764    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:38:30.518783    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:38:30.518808    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:38:30.518899    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:38:30.518987    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:38:30.519076    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:38:30.519152    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:38:30.544343    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0717 10:38:30.547913    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 10:38:30.557636    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0717 10:38:30.561333    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0717 10:38:30.570252    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 10:38:30.573631    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 10:38:30.582360    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0717 10:38:30.585629    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0717 10:38:30.593318    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0717 10:38:30.596412    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 10:38:30.604690    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0717 10:38:30.607967    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 10:38:30.616462    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:38:30.638619    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:38:30.660075    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:38:30.679834    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:38:30.699712    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 10:38:30.720095    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 10:38:30.740379    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 10:38:30.760837    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 10:38:30.780662    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:38:30.800982    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:38:30.821007    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:38:30.841019    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 10:38:30.855040    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0717 10:38:30.868897    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 10:38:30.882296    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0717 10:38:30.895884    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 10:38:30.909514    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 10:38:30.923253    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0717 10:38:30.937006    3636 ssh_runner.go:195] Run: openssl version
	I0717 10:38:30.941436    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:38:30.950257    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:38:30.955139    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:38:30.955192    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:38:30.959572    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 10:38:30.968160    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:38:30.976579    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:38:30.980025    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:38:30.980067    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:38:30.984288    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:38:30.992609    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:38:31.001221    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:38:31.004796    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:38:31.004841    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:38:31.009065    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
	I0717 10:38:31.017464    3636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:38:31.021030    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 10:38:31.025586    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 10:38:31.029983    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 10:38:31.034293    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 10:38:31.038625    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 10:38:31.042961    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
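[editor's note] The `openssl x509 -checkend 86400` runs above confirm each control-plane certificate remains valid for at least another 24 hours before the existing certs are reused. A Go analogue of that check (path taken from the log; the helper itself is illustrative):

// cert-expiry sketch: load a PEM certificate and confirm it stays valid for 24h.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func checkEnd(path string, window time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
	ok, err := checkEnd("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("valid for at least 24h:", ok)
}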
	I0717 10:38:31.047275    3636 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.30.2 docker true true} ...
	I0717 10:38:31.047334    3636 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-572000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
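[editor's note] The kubelet drop-in above is rendered per node: the binary path tracks the Kubernetes version and the hostname-override/node-ip flags track the joining node (m03, 192.169.0.7). A sketch that reconstructs that ExecStart line from this run's values (the helper is illustrative, not the minikube template):

// kubelet-flags sketch using the values from this log.
package main

import "fmt"

func kubeletExecStart(version, nodeName, nodeIP string) string {
	return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet "+
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
		"--config=/var/lib/kubelet/config.yaml "+
		"--hostname-override=%s "+
		"--kubeconfig=/etc/kubernetes/kubelet.conf "+
		"--node-ip=%s", version, nodeName, nodeIP)
}

func main() {
	fmt.Println(kubeletExecStart("v1.30.2", "ha-572000-m03", "192.169.0.7"))
}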
	I0717 10:38:31.047351    3636 kube-vip.go:115] generating kube-vip config ...
	I0717 10:38:31.047388    3636 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 10:38:31.059333    3636 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 10:38:31.059386    3636 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
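[editor's note] The generated kube-vip static pod carries the HA virtual IP (192.169.0.254) plus the leader-election timings as container env vars and is written to /etc/kubernetes/manifests a few lines below. A dependency-free sketch that pulls the `address` env value back out of such a manifest; a real implementation would parse YAML, this only illustrates what the generated config encodes:

// kube-vip manifest scan (illustrative only).
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/kubernetes/manifests/kube-vip.yaml") // path used in this log
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	prevWasAddress := false
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if prevWasAddress && strings.HasPrefix(line, "value:") {
			fmt.Println("kube-vip VIP:", strings.Trim(strings.TrimPrefix(line, "value:"), " \""))
			return
		}
		prevWasAddress = line == "- name: address"
	}
	log.Fatal("address env not found")
}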
	I0717 10:38:31.059445    3636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:38:31.067249    3636 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:38:31.067300    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 10:38:31.075304    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0717 10:38:31.088747    3636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:38:31.102087    3636 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0717 10:38:31.115605    3636 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0717 10:38:31.118396    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:38:31.128499    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:31.224486    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:38:31.238639    3636 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:38:31.238848    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:38:31.259920    3636 out.go:177] * Verifying Kubernetes components...
	I0717 10:38:31.280661    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:31.399137    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:38:31.415018    3636 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:38:31.415346    3636 kapi.go:59] client config for ha-572000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa0a8b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 10:38:31.415404    3636 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0717 10:38:31.415666    3636 node_ready.go:35] waiting up to 6m0s for node "ha-572000-m03" to be "Ready" ...
	I0717 10:38:31.415725    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:31.415732    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.415740    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.415745    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.421957    3636 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 10:38:31.422260    3636 node_ready.go:49] node "ha-572000-m03" has status "Ready":"True"
	I0717 10:38:31.422274    3636 node_ready.go:38] duration metric: took 6.596243ms for node "ha-572000-m03" to be "Ready" ...
	I0717 10:38:31.422281    3636 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:38:31.422331    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:31.422337    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.422343    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.422347    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.431073    3636 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 10:38:31.436681    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:31.436766    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:31.436772    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.436778    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.436782    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.440248    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:31.440722    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:31.440730    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.440735    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.440738    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.442939    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:31.937618    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:31.937636    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.937668    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.937673    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.940388    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:31.940820    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:31.940828    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.940834    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.940838    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.943159    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:32.437866    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:32.437879    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:32.437885    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:32.437888    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:32.446284    3636 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 10:38:32.446927    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:32.446936    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:32.446943    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:32.446948    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:32.452237    3636 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 10:38:32.937878    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:32.937890    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:32.937896    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:32.937901    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:32.940439    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:32.941049    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:32.941057    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:32.941064    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:32.941080    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:32.943760    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:33.437735    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:33.437751    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:33.437757    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:33.437760    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:33.440741    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:33.441277    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:33.441285    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:33.441291    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:33.441302    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:33.443897    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:33.444546    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
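[editor's note] The repeated GET pairs above are a ~500ms poll of the coredns pod and its node, repeated until the pod reports Ready or the 6m0s budget from pod_ready.go is exhausted. A sketch of the shape of that loop; the check function below is a stand-in, since the real code issues authenticated requests to https://192.169.0.5:8443 with the client certs loaded earlier:

// readiness-poll sketch: re-check a condition every 500ms until ready or timeout.
package main

import (
	"errors"
	"fmt"
	"time"
)

func pollUntilReady(timeout, interval time.Duration, ready func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ok, err := ready()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for readiness")
}

func main() {
	start := time.Now()
	err := pollUntilReady(6*time.Minute, 500*time.Millisecond, func() (bool, error) {
		// stand-in for: GET /api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
		// and inspecting status.conditions for Ready=True
		return time.Since(start) > 2*time.Second, nil
	})
	fmt.Println("poll result:", err)
}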
	I0717 10:38:33.938758    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:33.938781    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:33.938787    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:33.938791    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:33.941068    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:33.941437    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:33.941445    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:33.941451    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:33.941462    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:33.943283    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:34.437334    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:34.437347    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:34.437354    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:34.437357    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:34.440066    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:34.440546    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:34.440554    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:34.440560    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:34.440563    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:34.442659    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:34.938574    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:34.938586    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:34.938593    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:34.938602    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:34.941243    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:34.941810    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:34.941818    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:34.941824    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:34.941827    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:34.943881    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:35.437928    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:35.437948    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:35.437959    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:35.437965    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:35.441416    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:35.441923    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:35.441931    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:35.441937    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:35.441941    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:35.443781    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:35.937111    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:35.937132    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:35.937144    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:35.937149    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:35.941097    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:35.941689    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:35.941702    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:35.941708    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:35.941711    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:35.943483    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:35.943912    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:36.437284    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:36.437298    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:36.437304    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:36.437308    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:36.439570    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:36.440110    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:36.440117    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:36.440127    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:36.440130    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:36.441781    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:36.938251    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:36.938279    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:36.938357    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:36.938372    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:36.941451    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:36.942095    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:36.942103    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:36.942109    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:36.942112    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:36.943809    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:37.438234    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:37.438246    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:37.438251    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:37.438256    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:37.440243    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:37.440651    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:37.440658    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:37.440664    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:37.440674    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:37.442390    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:37.938519    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:37.938538    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:37.938588    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:37.938592    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:37.940708    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:37.941242    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:37.941250    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:37.941256    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:37.941260    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:37.942969    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:38.437210    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:38.437229    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:38.437263    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:38.437275    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:38.440621    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:38.441113    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:38.441120    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:38.441126    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:38.441130    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:38.444813    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:38.445187    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:38.937338    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:38.937354    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:38.937363    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:38.937368    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:38.939598    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:38.940020    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:38.940027    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:38.940033    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:38.940038    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:38.941562    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:39.437538    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:39.437553    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:39.437563    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:39.437566    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:39.439993    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:39.440392    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:39.440400    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:39.440405    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:39.440408    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:39.442187    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:39.938827    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:39.938859    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:39.938867    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:39.938871    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:39.941007    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:39.941470    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:39.941477    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:39.941482    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:39.941486    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:39.943155    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:40.437526    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:40.437540    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:40.437546    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:40.437550    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:40.439587    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:40.440056    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:40.440063    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:40.440068    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:40.440072    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:40.441961    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:40.937672    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:40.937688    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:40.937697    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:40.937701    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:40.940217    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:40.940568    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:40.940576    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:40.940581    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:40.940585    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:40.942351    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:40.942718    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:41.437331    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:41.437344    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:41.437350    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:41.437354    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:41.439766    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:41.440280    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:41.440287    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:41.440293    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:41.440296    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:41.441965    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:41.938758    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:41.938778    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:41.938790    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:41.938798    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:41.941589    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:41.942137    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:41.942146    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:41.942152    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:41.942157    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:41.943723    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:42.438172    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:42.438185    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:42.438194    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:42.438198    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:42.440429    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:42.440980    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:42.440988    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:42.440994    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:42.440998    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:42.442893    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:42.938134    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:42.938172    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:42.938183    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:42.938191    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:42.940744    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:42.941114    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:42.941122    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:42.941127    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:42.941131    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:42.942787    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:42.943905    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:43.438163    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:43.438195    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:43.438217    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:43.438224    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:43.440858    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:43.441272    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:43.441279    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:43.441288    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:43.441292    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:43.443069    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:43.937578    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:43.937589    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:43.937596    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:43.937599    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:43.939582    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:43.940136    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:43.940144    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:43.940150    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:43.940152    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:43.941646    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:44.437231    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:44.437244    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:44.437250    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:44.437254    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:44.439651    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:44.440190    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:44.440197    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:44.440202    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:44.440206    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:44.442158    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:44.937185    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:44.937196    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:44.937203    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:44.937206    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:44.939361    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:44.939788    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:44.939796    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:44.939802    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:44.939805    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:44.941482    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:45.437377    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:45.437392    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:45.437401    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:45.437406    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:45.439768    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:45.440303    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:45.440311    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:45.440317    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:45.440320    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:45.441925    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:45.442312    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:45.939181    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:45.939236    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:45.939246    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:45.939253    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:45.941938    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:45.942549    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:45.942557    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:45.942563    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:45.942566    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:45.944281    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:46.437228    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:46.437238    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:46.437245    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:46.437248    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:46.439099    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:46.439744    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:46.439751    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:46.439757    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:46.439760    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:46.441200    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:46.938133    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:46.938186    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:46.938196    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:46.938202    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:46.940467    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:46.940876    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:46.940884    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:46.940890    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:46.940893    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:46.942527    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:47.437838    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:47.437850    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:47.437857    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:47.437861    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:47.440152    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:47.440651    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:47.440660    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:47.440665    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:47.440669    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:47.442745    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:47.443107    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:47.937851    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:47.937867    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:47.937873    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:47.937876    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:47.940047    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:47.940510    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:47.940517    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:47.940523    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:47.940530    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:47.942242    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:48.439255    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:48.439310    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:48.439329    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:48.439338    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:48.442468    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:48.443256    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:48.443264    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:48.443269    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:48.443272    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:48.444868    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:48.937733    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:48.937744    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:48.937750    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:48.937753    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:48.939731    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:48.940190    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:48.940198    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:48.940204    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:48.940207    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:48.941747    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:49.438149    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:49.438169    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:49.438181    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:49.438190    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:49.441135    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:49.441712    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:49.441721    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:49.441726    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:49.441738    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:49.443421    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:49.443800    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:49.937835    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:49.937887    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:49.937895    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:49.937905    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:49.940121    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:49.940667    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:49.940674    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:49.940680    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:49.940698    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:49.942630    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:50.438458    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:50.438469    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:50.438476    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:50.438483    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:50.440697    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:50.441412    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:50.441420    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:50.441426    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:50.441430    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:50.443161    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:50.937976    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:50.937995    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:50.938003    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:50.938009    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:50.940796    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:50.941307    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:50.941315    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:50.941320    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:50.941323    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:50.943029    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:51.437692    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:51.437705    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:51.437714    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:51.437720    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:51.440262    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:51.440918    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:51.440926    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:51.440932    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:51.440936    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:51.442631    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:51.937774    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:51.937792    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:51.937801    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:51.937807    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:51.940276    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:51.940668    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:51.940675    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:51.940681    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:51.940685    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:51.942296    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:51.942616    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:52.438854    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:52.438878    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:52.438892    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:52.438900    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:52.442008    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:52.442522    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:52.442530    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:52.442536    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:52.442540    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:52.444262    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:52.937664    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:52.937675    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:52.937684    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:52.937687    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:52.939825    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:52.940415    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:52.940422    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:52.940428    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:52.940432    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:52.942064    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:53.439277    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:53.439300    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:53.439309    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:53.439315    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:53.441705    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:53.442130    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:53.442138    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:53.442143    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:53.442146    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:53.443926    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:53.938741    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:53.938755    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:53.938785    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:53.938790    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:53.941015    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:53.941672    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:53.941680    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:53.941685    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:53.941689    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:53.943953    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:53.944413    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:54.438636    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:54.438654    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:54.438663    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:54.438668    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:54.441262    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:54.441677    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:54.441684    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:54.441690    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:54.441693    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:54.443309    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:54.938770    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:54.938788    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:54.938798    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:54.938802    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:54.941486    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:54.941877    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:54.941884    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:54.941890    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:54.941893    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:54.943590    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:55.438030    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:55.438049    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:55.438059    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:55.438064    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:55.440706    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:55.441272    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:55.441280    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:55.441289    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:55.441292    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:55.443295    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:55.938147    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:55.938203    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:55.938215    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:55.938222    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:55.940270    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:55.940729    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:55.940737    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:55.940742    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:55.940745    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:55.942359    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:56.437637    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:56.437654    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:56.437666    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:56.437671    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:56.440401    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:56.440900    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:56.440909    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:56.440916    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:56.440920    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:56.442737    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:56.443083    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:56.938496    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:56.938521    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:56.938533    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:56.938541    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:56.941967    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:56.942683    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:56.942691    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:56.942697    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:56.942707    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:56.944542    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:57.438317    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:57.438392    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:57.438405    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:57.438411    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:57.441323    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:57.441768    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:57.441776    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:57.441780    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:57.441793    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:57.443513    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:57.937977    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:57.937990    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:57.937996    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:57.938000    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:57.940155    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:57.940631    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:57.940639    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:57.940645    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:57.940650    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:57.942518    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:58.438589    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:58.438606    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:58.438612    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:58.438615    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:58.440808    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:58.441401    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:58.441409    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:58.441415    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:58.441423    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:58.443141    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:58.443478    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:58.938651    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:58.938670    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:58.938679    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:58.938683    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:58.940981    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:58.941414    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:58.941422    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:58.941428    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:58.941431    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:58.943207    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:59.437795    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:59.437809    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:59.437815    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:59.437819    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:59.440022    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:59.440439    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:59.440446    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:59.440452    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:59.440457    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:59.442209    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:59.938380    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:59.938393    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:59.938400    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:59.938403    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:59.940648    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:59.941030    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:59.941038    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:59.941044    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:59.941048    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:59.942631    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:00.437586    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:00.437607    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:00.437616    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:00.437621    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:00.440082    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:00.440574    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:00.440582    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:00.440588    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:00.440591    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:00.442224    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:00.939171    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:00.939189    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:00.939198    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:00.939203    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:00.941658    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:00.942057    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:00.942065    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:00.942071    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:00.942075    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:00.943872    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:00.944304    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:01.438420    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:01.438444    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:01.438462    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:01.438475    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:01.441885    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:01.442448    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:01.442456    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:01.442462    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:01.442473    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:01.444325    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:01.937741    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:01.937759    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:01.937769    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:01.937774    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:01.941004    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:01.941638    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:01.941645    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:01.941651    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:01.941655    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:01.943421    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:02.439464    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:02.439515    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:02.439539    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:02.439547    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:02.442788    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:02.443568    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:02.443575    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:02.443581    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:02.443584    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:02.445070    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:02.939355    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:02.939398    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:02.939423    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:02.939432    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:02.943288    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:02.943786    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:02.943793    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:02.943798    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:02.943808    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:02.945549    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:02.945918    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:03.437814    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:03.437833    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:03.437846    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:03.437852    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:03.440696    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:03.441473    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:03.441481    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:03.441487    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:03.441494    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:03.443180    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:03.938154    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:03.938171    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:03.938179    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:03.938185    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:03.940749    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:03.941323    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:03.941330    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:03.941336    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:03.941338    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:03.942986    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:04.438509    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:04.438533    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:04.438544    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:04.438552    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:04.441587    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:04.442338    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:04.442346    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:04.442351    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:04.442354    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:04.443865    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:04.939464    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:04.939517    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:04.939527    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:04.939530    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:04.941589    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:04.942132    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:04.942139    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:04.942144    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:04.942147    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:04.943787    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:05.437854    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:05.437866    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:05.437872    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:05.437875    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:05.439895    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:05.440295    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:05.440303    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:05.440308    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:05.440312    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:05.441766    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:05.442130    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:05.937813    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:05.937871    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:05.937882    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:05.937888    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:05.940367    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:05.940885    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:05.940892    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:05.940898    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:05.940902    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:05.942721    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:06.438966    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:06.438991    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:06.439007    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:06.439020    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:06.442137    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:06.442785    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:06.442793    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:06.442799    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:06.442802    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:06.444436    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:06.938695    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:06.938714    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:06.938723    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:06.938727    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:06.941327    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:06.941790    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:06.941798    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:06.941802    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:06.941805    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:06.943432    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:07.438469    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:07.438553    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:07.438567    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:07.438573    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:07.442157    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:07.442736    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:07.442744    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:07.442750    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:07.442754    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:07.444281    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:07.444696    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:07.937804    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:07.937815    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:07.937821    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:07.937823    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:07.939794    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:07.940418    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:07.940426    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:07.940432    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:07.940435    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:07.942179    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:08.437799    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:08.437814    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:08.437821    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:08.437827    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:08.440300    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:08.440760    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:08.440768    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:08.440773    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:08.440776    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:08.442402    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:08.938764    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:08.938789    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:08.938896    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:08.938909    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:08.942041    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:08.942737    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:08.942744    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:08.942751    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:08.942754    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:08.944691    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:09.437781    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:09.437795    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:09.437802    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:09.437807    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:09.440310    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:09.440716    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:09.440725    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:09.440731    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:09.440741    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:09.442571    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:09.937834    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:09.937847    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:09.937853    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:09.937856    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:09.939731    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:09.940144    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:09.940153    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:09.940159    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:09.940163    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:09.941982    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:09.942266    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:10.438403    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:10.438414    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:10.438421    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:10.438424    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:10.440749    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:10.441120    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:10.441127    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:10.441133    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:10.441138    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:10.442757    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:10.939169    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:10.939227    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:10.939238    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:10.939244    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:10.942004    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:10.942575    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:10.942582    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:10.942588    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:10.942591    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:10.944436    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:11.438251    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:11.438276    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:11.438353    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:11.438364    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:11.441421    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:11.441961    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:11.441969    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:11.441975    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:11.441979    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:11.446242    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:39:11.938022    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:11.938033    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:11.938040    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:11.938044    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:11.939924    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:11.940511    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:11.940519    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:11.940525    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:11.940528    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:11.942450    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:11.942833    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:12.439246    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:12.439269    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:12.439279    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:12.439285    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:12.442445    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:12.443020    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:12.443027    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:12.443033    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:12.443037    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:12.444778    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:12.939028    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:12.939059    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:12.939075    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:12.939144    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:12.941663    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:12.942169    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:12.942176    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:12.942182    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:12.942198    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:12.944174    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:13.439017    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:13.439030    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:13.439036    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:13.439039    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:13.441436    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:13.442003    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:13.442011    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:13.442017    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:13.442020    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:13.443715    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:13.939125    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:13.939138    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:13.939150    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:13.939154    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:13.941396    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:13.942124    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:13.942133    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:13.942138    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:13.942141    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:13.943860    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:13.944207    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:14.439525    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:14.439539    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:14.439545    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:14.439549    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:14.441636    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:14.442072    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:14.442080    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:14.442085    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:14.442088    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:14.443727    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:14.938392    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:14.938412    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:14.938425    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:14.938431    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:14.941839    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:14.942527    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:14.942535    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:14.942541    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:14.942556    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:14.944390    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:15.439124    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:15.439154    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:15.439236    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:15.439243    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:15.442572    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:15.443123    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:15.443133    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:15.443141    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:15.443145    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:15.445133    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:15.938789    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:15.938855    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:15.938870    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:15.938877    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:15.941774    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:15.942286    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:15.942294    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:15.942300    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:15.942304    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:15.944348    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:15.944660    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:16.439349    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:16.439368    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:16.439378    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:16.439383    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:16.441938    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:16.442524    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:16.442532    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:16.442537    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:16.442548    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:16.444186    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:16.938018    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:16.938067    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:16.938075    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:16.938081    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:16.940227    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:16.940771    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:16.940780    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:16.940785    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:16.940789    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:16.942609    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:17.438002    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:17.438028    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:17.438034    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:17.438038    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:17.440220    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:17.440724    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:17.440733    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:17.440739    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:17.440742    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:17.442604    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:17.938219    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:17.938237    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:17.938249    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:17.938255    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:17.941281    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:17.941690    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:17.941698    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:17.941703    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:17.941707    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:17.943715    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:18.439167    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:18.439186    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:18.439195    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:18.439200    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:18.441725    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:18.442096    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:18.442104    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:18.442109    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:18.442113    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:18.443738    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:18.444159    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:18.939393    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:18.939469    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:18.939479    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:18.939485    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:18.941987    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:18.942423    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:18.942431    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:18.942436    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:18.942439    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:18.944249    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:19.438795    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:19.438808    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.438814    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.438816    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.441023    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:19.441456    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:19.441464    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.441470    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.441475    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.443744    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:19.444095    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.444104    3636 pod_ready.go:81] duration metric: took 48.006189425s for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.444111    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.444150    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9dzd5
	I0717 10:39:19.444154    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.444160    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.444165    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.447092    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:19.447847    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:19.447856    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.447861    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.447865    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.449618    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:19.449899    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.449908    3636 pod_ready.go:81] duration metric: took 5.792129ms for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.449915    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.449950    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000
	I0717 10:39:19.449955    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.449961    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.449966    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.451887    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:19.452242    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:19.452249    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.452255    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.452259    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.455734    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:19.456038    3636 pod_ready.go:92] pod "etcd-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.456048    3636 pod_ready.go:81] duration metric: took 6.128452ms for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.456055    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.456091    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m02
	I0717 10:39:19.456096    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.456102    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.456104    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.459121    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:19.459474    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:19.459482    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.459487    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.459491    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.461049    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:19.461321    3636 pod_ready.go:92] pod "etcd-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.461330    3636 pod_ready.go:81] duration metric: took 5.269541ms for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.461337    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.461367    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m03
	I0717 10:39:19.461373    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.461378    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.461381    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.463280    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:19.463738    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:19.463745    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.463750    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.463754    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.466609    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:19.466864    3636 pod_ready.go:92] pod "etcd-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.466874    3636 pod_ready.go:81] duration metric: took 5.532002ms for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.466885    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.640514    3636 request.go:629] Waited for 173.589043ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:39:19.640593    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:39:19.640602    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.640610    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.640614    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.643241    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:19.839100    3636 request.go:629] Waited for 195.343311ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:19.839145    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:19.839152    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.839188    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.839194    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.845230    3636 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 10:39:19.845548    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.845558    3636 pod_ready.go:81] duration metric: took 378.657463ms for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.845565    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:20.040239    3636 request.go:629] Waited for 194.632219ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:39:20.040319    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:39:20.040328    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:20.040336    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:20.040342    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:20.042714    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:20.240297    3636 request.go:629] Waited for 196.995157ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:20.240378    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:20.240384    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:20.240390    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:20.240396    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:20.242369    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:20.242695    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:20.242704    3636 pod_ready.go:81] duration metric: took 397.124019ms for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:20.242711    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:20.439359    3636 request.go:629] Waited for 196.544114ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:39:20.439408    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:39:20.439416    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:20.439427    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:20.439434    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:20.442435    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:20.638955    3636 request.go:629] Waited for 196.048572ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:20.639046    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:20.639056    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:20.639068    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:20.639075    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:20.642008    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:20.642430    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:20.642442    3636 pod_ready.go:81] duration metric: took 399.714561ms for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:20.642451    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:20.838986    3636 request.go:629] Waited for 196.455933ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:39:20.839106    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:39:20.839119    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:20.839131    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:20.839141    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:20.842621    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:21.039118    3636 request.go:629] Waited for 195.900542ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:21.039165    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:21.039176    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:21.039188    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:21.039196    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:21.042149    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:21.042711    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:21.042741    3636 pod_ready.go:81] duration metric: took 400.268935ms for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:21.042748    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:21.238981    3636 request.go:629] Waited for 196.178207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:39:21.239040    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:39:21.239051    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:21.239063    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:21.239071    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:21.242170    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:21.440519    3636 request.go:629] Waited for 197.63517ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:21.440569    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:21.440581    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:21.440597    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:21.440606    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:21.443784    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:21.444203    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:21.444212    3636 pod_ready.go:81] duration metric: took 401.448672ms for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:21.444219    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:21.640166    3636 request.go:629] Waited for 195.890355ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:39:21.640224    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:39:21.640235    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:21.640246    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:21.640254    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:21.643178    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:21.840025    3636 request.go:629] Waited for 196.38625ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:21.840077    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:21.840087    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:21.840099    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:21.840107    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:21.842881    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:21.843340    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:21.843349    3636 pod_ready.go:81] duration metric: took 399.115148ms for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:21.843356    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:22.038929    3636 request.go:629] Waited for 195.527396ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:39:22.038980    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:39:22.038991    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:22.039000    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:22.039006    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:22.041797    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:22.239447    3636 request.go:629] Waited for 196.85315ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:39:22.239496    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:39:22.239504    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:22.239515    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:22.239525    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:22.242443    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:22.242932    3636 pod_ready.go:97] node "ha-572000-m04" hosting pod "kube-proxy-5wcph" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-572000-m04" has status "Ready":"Unknown"
	I0717 10:39:22.242948    3636 pod_ready.go:81] duration metric: took 399.575996ms for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	E0717 10:39:22.242956    3636 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-572000-m04" hosting pod "kube-proxy-5wcph" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-572000-m04" has status "Ready":"Unknown"
	I0717 10:39:22.242964    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:22.439269    3636 request.go:629] Waited for 196.255356ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:39:22.439393    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:39:22.439403    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:22.439414    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:22.439420    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:22.442456    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:22.640394    3636 request.go:629] Waited for 197.266214ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:22.640491    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:22.640500    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:22.640509    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:22.640514    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:22.643031    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:22.643471    3636 pod_ready.go:92] pod "kube-proxy-h7k9z" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:22.643480    3636 pod_ready.go:81] duration metric: took 400.50076ms for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:22.643487    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:22.839377    3636 request.go:629] Waited for 195.844443ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:39:22.839468    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:39:22.839477    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:22.839485    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:22.839491    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:22.841921    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:23.039004    3636 request.go:629] Waited for 196.604394ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:23.039109    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:23.039120    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:23.039131    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:23.039138    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:23.042022    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:23.042449    3636 pod_ready.go:92] pod "kube-proxy-hst7h" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:23.042462    3636 pod_ready.go:81] duration metric: took 398.959822ms for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:23.042480    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:23.240001    3636 request.go:629] Waited for 197.469314ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:39:23.240093    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:39:23.240110    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:23.240121    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:23.240131    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:23.243284    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:23.439300    3636 request.go:629] Waited for 195.300943ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:23.439332    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:23.439336    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:23.439343    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:23.439370    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:23.441287    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:23.441722    3636 pod_ready.go:92] pod "kube-proxy-v6jxh" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:23.441732    3636 pod_ready.go:81] duration metric: took 399.23495ms for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:23.441739    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:23.638943    3636 request.go:629] Waited for 197.165268ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:39:23.639000    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:39:23.639006    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:23.639012    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:23.639017    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:23.641044    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:23.840535    3636 request.go:629] Waited for 199.126882ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:23.840627    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:23.840639    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:23.840679    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:23.840691    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:23.843464    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:23.843963    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:23.843976    3636 pod_ready.go:81] duration metric: took 402.220047ms for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:23.843984    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:24.039540    3636 request.go:629] Waited for 195.50331ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:39:24.039598    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:39:24.039670    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.039685    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.039691    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.042477    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:24.239459    3636 request.go:629] Waited for 196.457492ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:24.239561    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:24.239573    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.239585    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.239591    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.242659    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:24.243312    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:24.243327    3636 pod_ready.go:81] duration metric: took 399.325407ms for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:24.243336    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:24.439080    3636 request.go:629] Waited for 195.673891ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:39:24.439191    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:39:24.439202    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.439213    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.439223    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.443262    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:39:24.639182    3636 request.go:629] Waited for 195.517919ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:24.639292    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:24.639303    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.639316    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.639324    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.642200    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:24.642657    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:24.642666    3636 pod_ready.go:81] duration metric: took 399.31371ms for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:24.642674    3636 pod_ready.go:38] duration metric: took 53.219035328s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:39:24.642686    3636 api_server.go:52] waiting for apiserver process to appear ...
	I0717 10:39:24.642749    3636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 10:39:24.655291    3636 api_server.go:72] duration metric: took 53.415271815s to wait for apiserver process to appear ...
	I0717 10:39:24.655303    3636 api_server.go:88] waiting for apiserver healthz status ...
	I0717 10:39:24.655313    3636 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0717 10:39:24.659504    3636 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0717 10:39:24.659539    3636 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0717 10:39:24.659544    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.659549    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.659552    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.660035    3636 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:39:24.660129    3636 api_server.go:141] control plane version: v1.30.2
	I0717 10:39:24.660138    3636 api_server.go:131] duration metric: took 4.830633ms to wait for apiserver health ...
	I0717 10:39:24.660142    3636 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 10:39:24.840282    3636 request.go:629] Waited for 180.099076ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:39:24.840353    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:39:24.840361    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.840369    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.840373    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.845121    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:39:24.850038    3636 system_pods.go:59] 26 kube-system pods found
	I0717 10:39:24.850051    3636 system_pods.go:61] "coredns-7db6d8ff4d-2phrp" [e76a91cf-8a14-454c-baee-7c0128645e0e] Running
	I0717 10:39:24.850054    3636 system_pods.go:61] "coredns-7db6d8ff4d-9dzd5" [f9a7cff1-56a3-4600-ae2f-a951dda10753] Running
	I0717 10:39:24.850057    3636 system_pods.go:61] "etcd-ha-572000" [2d7b717e-d404-4c63-afe9-799de6711964] Running
	I0717 10:39:24.850060    3636 system_pods.go:61] "etcd-ha-572000-m02" [8abc7662-c159-4953-9aba-11a75b4e7d65] Running
	I0717 10:39:24.850062    3636 system_pods.go:61] "etcd-ha-572000-m03" [78629908-d362-4a27-933f-2b929867c22f] Running
	I0717 10:39:24.850065    3636 system_pods.go:61] "kindnet-5xsrp" [0b84aa4d-7452-47f6-8052-f063e2e8ef7b] Running
	I0717 10:39:24.850067    3636 system_pods.go:61] "kindnet-72zfp" [0cc735c4-08b3-499a-b9e2-8c38377f371b] Running
	I0717 10:39:24.850069    3636 system_pods.go:61] "kindnet-g2m92" [be3d84cf-f9e8-426c-b02a-49eb7eed9d6a] Running
	I0717 10:39:24.850071    3636 system_pods.go:61] "kindnet-t85bv" [8649cc70-8ce8-4caf-938e-bd253fa5b7ae] Running
	I0717 10:39:24.850074    3636 system_pods.go:61] "kube-apiserver-ha-572000" [6409591d-7a50-414b-be63-44d5ddb0b0e0] Running
	I0717 10:39:24.850076    3636 system_pods.go:61] "kube-apiserver-ha-572000-m02" [d658459c-67f9-4798-87f2-c651d830af35] Running
	I0717 10:39:24.850078    3636 system_pods.go:61] "kube-apiserver-ha-572000-m03" [ac1c02f1-f023-4ad2-99bc-2657fb9d50d7] Running
	I0717 10:39:24.850081    3636 system_pods.go:61] "kube-controller-manager-ha-572000" [075c4d23-04bd-404a-822d-ae7d326a68ac] Running
	I0717 10:39:24.850084    3636 system_pods.go:61] "kube-controller-manager-ha-572000-m02" [3014bdb3-bb86-48d5-a045-2a8dab026bd8] Running
	I0717 10:39:24.850086    3636 system_pods.go:61] "kube-controller-manager-ha-572000-m03" [3b76e220-3a5b-4dac-8f69-6b380799d2ef] Running
	I0717 10:39:24.850088    3636 system_pods.go:61] "kube-proxy-5wcph" [731f5b57-131e-4e97-b47a-036b8d4edbcd] Running
	I0717 10:39:24.850105    3636 system_pods.go:61] "kube-proxy-h7k9z" [cf687084-b40d-43b6-9476-8c60c5f37d1d] Running
	I0717 10:39:24.850110    3636 system_pods.go:61] "kube-proxy-hst7h" [2b7c8d2c-3d71-4357-9429-1d8438c446f5] Running
	I0717 10:39:24.850113    3636 system_pods.go:61] "kube-proxy-v6jxh" [3f952fc8-747f-49da-b400-5212dba538d8] Running
	I0717 10:39:24.850116    3636 system_pods.go:61] "kube-scheduler-ha-572000" [da36a422-ba49-4cd6-8f47-9be7c43657be] Running
	I0717 10:39:24.850118    3636 system_pods.go:61] "kube-scheduler-ha-572000-m02" [13e289a2-8d3c-478f-9220-1d114dd5bf62] Running
	I0717 10:39:24.850121    3636 system_pods.go:61] "kube-scheduler-ha-572000-m03" [428716db-8096-4b57-9f90-e706d5683852] Running
	I0717 10:39:24.850124    3636 system_pods.go:61] "kube-vip-ha-572000" [57c4f54d-e859-4429-90dd-0402a9e73727] Running
	I0717 10:39:24.850127    3636 system_pods.go:61] "kube-vip-ha-572000-m02" [fa57b8d2-bc41-42af-9d9a-1b68f882e6fe] Running
	I0717 10:39:24.850129    3636 system_pods.go:61] "kube-vip-ha-572000-m03" [a35c5007-dfe3-4b58-b16c-d568b8f9c1c2] Running
	I0717 10:39:24.850133    3636 system_pods.go:61] "storage-provisioner" [1801216d-6529-4049-b874-1577132fc03f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 10:39:24.850139    3636 system_pods.go:74] duration metric: took 189.987862ms to wait for pod list to return data ...
	I0717 10:39:24.850145    3636 default_sa.go:34] waiting for default service account to be created ...
	I0717 10:39:25.040731    3636 request.go:629] Waited for 190.528349ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0717 10:39:25.040830    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0717 10:39:25.040841    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:25.040852    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:25.040860    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:25.044018    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:25.044088    3636 default_sa.go:45] found service account: "default"
	I0717 10:39:25.044097    3636 default_sa.go:55] duration metric: took 193.941803ms for default service account to be created ...
	I0717 10:39:25.044103    3636 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 10:39:25.240503    3636 request.go:629] Waited for 196.351718ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:39:25.240543    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:39:25.240548    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:25.240554    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:25.240583    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:25.244975    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:39:25.249908    3636 system_pods.go:86] 26 kube-system pods found
	I0717 10:39:25.249919    3636 system_pods.go:89] "coredns-7db6d8ff4d-2phrp" [e76a91cf-8a14-454c-baee-7c0128645e0e] Running
	I0717 10:39:25.249923    3636 system_pods.go:89] "coredns-7db6d8ff4d-9dzd5" [f9a7cff1-56a3-4600-ae2f-a951dda10753] Running
	I0717 10:39:25.249940    3636 system_pods.go:89] "etcd-ha-572000" [2d7b717e-d404-4c63-afe9-799de6711964] Running
	I0717 10:39:25.249944    3636 system_pods.go:89] "etcd-ha-572000-m02" [8abc7662-c159-4953-9aba-11a75b4e7d65] Running
	I0717 10:39:25.249948    3636 system_pods.go:89] "etcd-ha-572000-m03" [78629908-d362-4a27-933f-2b929867c22f] Running
	I0717 10:39:25.249951    3636 system_pods.go:89] "kindnet-5xsrp" [0b84aa4d-7452-47f6-8052-f063e2e8ef7b] Running
	I0717 10:39:25.249955    3636 system_pods.go:89] "kindnet-72zfp" [0cc735c4-08b3-499a-b9e2-8c38377f371b] Running
	I0717 10:39:25.249959    3636 system_pods.go:89] "kindnet-g2m92" [be3d84cf-f9e8-426c-b02a-49eb7eed9d6a] Running
	I0717 10:39:25.249962    3636 system_pods.go:89] "kindnet-t85bv" [8649cc70-8ce8-4caf-938e-bd253fa5b7ae] Running
	I0717 10:39:25.249966    3636 system_pods.go:89] "kube-apiserver-ha-572000" [6409591d-7a50-414b-be63-44d5ddb0b0e0] Running
	I0717 10:39:25.249969    3636 system_pods.go:89] "kube-apiserver-ha-572000-m02" [d658459c-67f9-4798-87f2-c651d830af35] Running
	I0717 10:39:25.249973    3636 system_pods.go:89] "kube-apiserver-ha-572000-m03" [ac1c02f1-f023-4ad2-99bc-2657fb9d50d7] Running
	I0717 10:39:25.249976    3636 system_pods.go:89] "kube-controller-manager-ha-572000" [075c4d23-04bd-404a-822d-ae7d326a68ac] Running
	I0717 10:39:25.249979    3636 system_pods.go:89] "kube-controller-manager-ha-572000-m02" [3014bdb3-bb86-48d5-a045-2a8dab026bd8] Running
	I0717 10:39:25.249983    3636 system_pods.go:89] "kube-controller-manager-ha-572000-m03" [3b76e220-3a5b-4dac-8f69-6b380799d2ef] Running
	I0717 10:39:25.249987    3636 system_pods.go:89] "kube-proxy-5wcph" [731f5b57-131e-4e97-b47a-036b8d4edbcd] Running
	I0717 10:39:25.249990    3636 system_pods.go:89] "kube-proxy-h7k9z" [cf687084-b40d-43b6-9476-8c60c5f37d1d] Running
	I0717 10:39:25.249994    3636 system_pods.go:89] "kube-proxy-hst7h" [2b7c8d2c-3d71-4357-9429-1d8438c446f5] Running
	I0717 10:39:25.249997    3636 system_pods.go:89] "kube-proxy-v6jxh" [3f952fc8-747f-49da-b400-5212dba538d8] Running
	I0717 10:39:25.250001    3636 system_pods.go:89] "kube-scheduler-ha-572000" [da36a422-ba49-4cd6-8f47-9be7c43657be] Running
	I0717 10:39:25.250005    3636 system_pods.go:89] "kube-scheduler-ha-572000-m02" [13e289a2-8d3c-478f-9220-1d114dd5bf62] Running
	I0717 10:39:25.250008    3636 system_pods.go:89] "kube-scheduler-ha-572000-m03" [428716db-8096-4b57-9f90-e706d5683852] Running
	I0717 10:39:25.250012    3636 system_pods.go:89] "kube-vip-ha-572000" [57c4f54d-e859-4429-90dd-0402a9e73727] Running
	I0717 10:39:25.250019    3636 system_pods.go:89] "kube-vip-ha-572000-m02" [fa57b8d2-bc41-42af-9d9a-1b68f882e6fe] Running
	I0717 10:39:25.250026    3636 system_pods.go:89] "kube-vip-ha-572000-m03" [a35c5007-dfe3-4b58-b16c-d568b8f9c1c2] Running
	I0717 10:39:25.250031    3636 system_pods.go:89] "storage-provisioner" [1801216d-6529-4049-b874-1577132fc03f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 10:39:25.250037    3636 system_pods.go:126] duration metric: took 205.924043ms to wait for k8s-apps to be running ...
	I0717 10:39:25.250043    3636 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 10:39:25.250097    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 10:39:25.260730    3636 system_svc.go:56] duration metric: took 10.680441ms WaitForService to wait for kubelet
	I0717 10:39:25.260752    3636 kubeadm.go:582] duration metric: took 54.020711767s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:39:25.260767    3636 node_conditions.go:102] verifying NodePressure condition ...
	I0717 10:39:25.440260    3636 request.go:629] Waited for 179.444294ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0717 10:39:25.440305    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0717 10:39:25.440313    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:25.440326    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:25.440335    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:25.443664    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:25.444820    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:39:25.444830    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:39:25.444839    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:39:25.444842    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:39:25.444845    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:39:25.444848    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:39:25.444851    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:39:25.444854    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:39:25.444857    3636 node_conditions.go:105] duration metric: took 184.081224ms to run NodePressure ...
	I0717 10:39:25.444866    3636 start.go:241] waiting for startup goroutines ...
	I0717 10:39:25.444881    3636 start.go:255] writing updated cluster config ...
	I0717 10:39:25.466841    3636 out.go:177] 
	I0717 10:39:25.488444    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:39:25.488557    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:39:25.511165    3636 out.go:177] * Starting "ha-572000-m04" worker node in "ha-572000" cluster
	I0717 10:39:25.553049    3636 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:39:25.553078    3636 cache.go:56] Caching tarball of preloaded images
	I0717 10:39:25.553293    3636 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:39:25.553311    3636 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:39:25.553441    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:39:25.554263    3636 start.go:360] acquireMachinesLock for ha-572000-m04: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:39:25.554357    3636 start.go:364] duration metric: took 71.034µs to acquireMachinesLock for "ha-572000-m04"
	I0717 10:39:25.554380    3636 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:39:25.554388    3636 fix.go:54] fixHost starting: m04
	I0717 10:39:25.554780    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:39:25.554805    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:39:25.564043    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52015
	I0717 10:39:25.564385    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:39:25.564752    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:39:25.564769    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:39:25.564963    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:39:25.565075    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:39:25.565158    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetState
	I0717 10:39:25.565257    3636 main.go:141] libmachine: (ha-572000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:39:25.565368    3636 main.go:141] libmachine: (ha-572000-m04) DBG | hyperkit pid from json: 3096
	I0717 10:39:25.566303    3636 main.go:141] libmachine: (ha-572000-m04) DBG | hyperkit pid 3096 missing from process table
	I0717 10:39:25.566325    3636 fix.go:112] recreateIfNeeded on ha-572000-m04: state=Stopped err=<nil>
	I0717 10:39:25.566334    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	W0717 10:39:25.566413    3636 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:39:25.587318    3636 out.go:177] * Restarting existing hyperkit VM for "ha-572000-m04" ...
	I0717 10:39:25.629121    3636 main.go:141] libmachine: (ha-572000-m04) Calling .Start
	I0717 10:39:25.629280    3636 main.go:141] libmachine: (ha-572000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:39:25.629323    3636 main.go:141] libmachine: (ha-572000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/hyperkit.pid
	I0717 10:39:25.629373    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Using UUID d62b35de-5f9d-4091-a1f9-ae55052b3d93
	I0717 10:39:25.659758    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Generated MAC 1e:37:45:6a:f1:7f
	I0717 10:39:25.659780    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:39:25.659921    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"d62b35de-5f9d-4091-a1f9-ae55052b3d93", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002879b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:39:25.659979    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"d62b35de-5f9d-4091-a1f9-ae55052b3d93", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002879b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:39:25.660027    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "d62b35de-5f9d-4091-a1f9-ae55052b3d93", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/ha-572000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:39:25.660072    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U d62b35de-5f9d-4091-a1f9-ae55052b3d93 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/ha-572000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:39:25.660086    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:39:25.661465    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: Pid is 3683
	I0717 10:39:25.661986    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Attempt 0
	I0717 10:39:25.661995    3636 main.go:141] libmachine: (ha-572000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:39:25.662068    3636 main.go:141] libmachine: (ha-572000-m04) DBG | hyperkit pid from json: 3683
	I0717 10:39:25.664876    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Searching for 1e:37:45:6a:f1:7f in /var/db/dhcpd_leases ...
	I0717 10:39:25.665000    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:39:25.665028    3636 main.go:141] libmachine: (ha-572000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:d3:62:da:43:cf ID:1,6e:d3:62:da:43:cf Lease:0x6699530d}
	I0717 10:39:25.665090    3636 main.go:141] libmachine: (ha-572000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x669952ec}
	I0717 10:39:25.665098    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetConfigRaw
	I0717 10:39:25.665107    3636 main.go:141] libmachine: (ha-572000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x669952da}
	I0717 10:39:25.665121    3636 main.go:141] libmachine: (ha-572000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:37:45:6a:f1:7f ID:1,1e:37:45:6a:f1:7f Lease:0x6698001b}
	I0717 10:39:25.665133    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Found match: 1e:37:45:6a:f1:7f
	I0717 10:39:25.665155    3636 main.go:141] libmachine: (ha-572000-m04) DBG | IP: 192.169.0.8
	I0717 10:39:25.665871    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetIP
	I0717 10:39:25.666075    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:39:25.666480    3636 machine.go:94] provisionDockerMachine start ...
	I0717 10:39:25.666492    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:39:25.666622    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:39:25.666758    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:39:25.666855    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:39:25.666997    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:39:25.667100    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:39:25.667218    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:39:25.667397    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:39:25.667404    3636 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:39:25.669640    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:39:25.678044    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:39:25.679048    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:39:25.679102    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:39:25.679117    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:39:25.679129    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:39:26.061153    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:39:26.061169    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:39:26.176025    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:39:26.176085    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:39:26.176109    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:39:26.176141    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:39:26.176817    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:39:26.176827    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:39:31.459017    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:31 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:39:31.459116    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:31 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:39:31.459128    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:31 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:39:31.482911    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:31 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:40:00.729304    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:40:00.729320    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetMachineName
	I0717 10:40:00.729447    3636 buildroot.go:166] provisioning hostname "ha-572000-m04"
	I0717 10:40:00.729459    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetMachineName
	I0717 10:40:00.729548    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:00.729650    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:00.729752    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:00.729829    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:00.729922    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:00.730060    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:00.730229    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:00.730238    3636 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000-m04 && echo "ha-572000-m04" | sudo tee /etc/hostname
	I0717 10:40:00.792250    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000-m04
	
	I0717 10:40:00.792267    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:00.792395    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:00.792496    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:00.792601    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:00.792686    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:00.792813    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:00.792953    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:00.792965    3636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:40:00.851570    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:40:00.851592    3636 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:40:00.851608    3636 buildroot.go:174] setting up certificates
	I0717 10:40:00.851614    3636 provision.go:84] configureAuth start
	I0717 10:40:00.851621    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetMachineName
	I0717 10:40:00.851754    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetIP
	I0717 10:40:00.851843    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:00.851935    3636 provision.go:143] copyHostCerts
	I0717 10:40:00.851965    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:40:00.852026    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:40:00.852032    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:40:00.852183    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:40:00.852421    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:40:00.852465    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:40:00.852470    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:40:00.852549    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:40:00.852695    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:40:00.852734    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:40:00.852739    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:40:00.852814    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:40:00.852963    3636 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000-m04 san=[127.0.0.1 192.169.0.8 ha-572000-m04 localhost minikube]
	I0717 10:40:01.012731    3636 provision.go:177] copyRemoteCerts
	I0717 10:40:01.012781    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:40:01.012796    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:01.012945    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:01.013036    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.013118    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:01.013205    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/id_rsa Username:docker}
	I0717 10:40:01.045440    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:40:01.045513    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 10:40:01.065877    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:40:01.065952    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:40:01.086341    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:40:01.086417    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:40:01.107237    3636 provision.go:87] duration metric: took 255.607467ms to configureAuth
	I0717 10:40:01.107252    3636 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:40:01.107441    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:40:01.107470    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:01.107602    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:01.107691    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:01.107775    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.107862    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.107936    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:01.108052    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:01.108176    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:01.108184    3636 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:40:01.159812    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:40:01.159826    3636 buildroot.go:70] root file system type: tmpfs
	I0717 10:40:01.159906    3636 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:40:01.159918    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:01.160045    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:01.160133    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.160218    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.160312    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:01.160436    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:01.160588    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:01.160638    3636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:40:01.222986    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:40:01.223013    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:01.223158    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:01.223263    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.223339    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.223425    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:01.223557    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:01.223705    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:01.223717    3636 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:40:02.793231    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:40:02.793247    3636 machine.go:97] duration metric: took 37.125816173s to provisionDockerMachine
	I0717 10:40:02.793256    3636 start.go:293] postStartSetup for "ha-572000-m04" (driver="hyperkit")
	I0717 10:40:02.793263    3636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:40:02.793273    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:02.793461    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:40:02.793475    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:02.793570    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:02.793662    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:02.793746    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:02.793821    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/id_rsa Username:docker}
	I0717 10:40:02.826174    3636 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:40:02.829517    3636 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:40:02.829527    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:40:02.829627    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:40:02.829814    3636 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:40:02.829820    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:40:02.830025    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:40:02.837723    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:40:02.858109    3636 start.go:296] duration metric: took 64.843134ms for postStartSetup
	I0717 10:40:02.858164    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:02.858343    3636 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:40:02.858357    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:02.858452    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:02.858535    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:02.858625    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:02.858709    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/id_rsa Username:docker}
	I0717 10:40:02.891466    3636 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:40:02.891526    3636 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:40:02.924508    3636 fix.go:56] duration metric: took 37.369170253s for fixHost
	I0717 10:40:02.924533    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:02.924664    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:02.924753    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:02.924844    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:02.924927    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:02.925043    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:02.925181    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:02.925189    3636 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 10:40:02.979156    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721238002.907586801
	
	I0717 10:40:02.979168    3636 fix.go:216] guest clock: 1721238002.907586801
	I0717 10:40:02.979174    3636 fix.go:229] Guest: 2024-07-17 10:40:02.907586801 -0700 PDT Remote: 2024-07-17 10:40:02.924523 -0700 PDT m=+161.794729692 (delta=-16.936199ms)
	I0717 10:40:02.979185    3636 fix.go:200] guest clock delta is within tolerance: -16.936199ms
	I0717 10:40:02.979189    3636 start.go:83] releasing machines lock for "ha-572000-m04", held for 37.423872596s
	I0717 10:40:02.979207    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:02.979341    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetIP
	I0717 10:40:03.002677    3636 out.go:177] * Found network options:
	I0717 10:40:03.023433    3636 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0717 10:40:03.044600    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:40:03.044630    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:40:03.044645    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:40:03.044662    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:03.045380    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:03.045584    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:03.045691    3636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:40:03.045739    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	W0717 10:40:03.045803    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:40:03.045829    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:40:03.045847    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:40:03.045916    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:03.045932    3636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 10:40:03.045950    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:03.046116    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:03.046197    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:03.046277    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:03.046336    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:03.046416    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/id_rsa Username:docker}
	I0717 10:40:03.046472    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:03.046583    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/id_rsa Username:docker}
	W0717 10:40:03.078338    3636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:40:03.078404    3636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:40:03.127460    3636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:40:03.127478    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:40:03.127562    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:40:03.143174    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:40:03.152039    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:40:03.160575    3636 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:40:03.160636    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:40:03.169267    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:40:03.178061    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:40:03.186799    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:40:03.195713    3636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:40:03.205361    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:40:03.214887    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:40:03.223632    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:40:03.232306    3636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:40:03.240303    3636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:40:03.248146    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:03.349118    3636 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:40:03.368632    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:40:03.368697    3636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:40:03.382935    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:40:03.394904    3636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:40:03.408677    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:40:03.424538    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:40:03.436679    3636 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:40:03.457267    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:40:03.468621    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:40:03.484458    3636 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:40:03.487477    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:40:03.495866    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:40:03.509467    3636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:40:03.610005    3636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:40:03.711300    3636 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:40:03.711330    3636 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:40:03.725314    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:03.818685    3636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:40:06.069148    3636 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.250387117s)
	I0717 10:40:06.069225    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:40:06.080064    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:40:06.090634    3636 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:40:06.182522    3636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:40:06.285041    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:06.397211    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:40:06.410586    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:40:06.421941    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:06.525211    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 10:40:06.593566    3636 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:40:06.593658    3636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:40:06.598237    3636 start.go:563] Will wait 60s for crictl version
	I0717 10:40:06.598298    3636 ssh_runner.go:195] Run: which crictl
	I0717 10:40:06.601369    3636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:40:06.630287    3636 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 10:40:06.630357    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:40:06.648217    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:40:06.713331    3636 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:40:06.734501    3636 out.go:177]   - env NO_PROXY=192.169.0.5
	I0717 10:40:06.755443    3636 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0717 10:40:06.776545    3636 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	I0717 10:40:06.797619    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetIP
	I0717 10:40:06.797849    3636 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:40:06.801369    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:40:06.811681    3636 mustload.go:65] Loading cluster: ha-572000
	I0717 10:40:06.811867    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:40:06.812096    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:40:06.812120    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:40:06.821106    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52038
	I0717 10:40:06.821460    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:40:06.821823    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:40:06.821839    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:40:06.822045    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:40:06.822158    3636 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:40:06.822237    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:40:06.822325    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3650
	I0717 10:40:06.823304    3636 host.go:66] Checking if "ha-572000" exists ...
	I0717 10:40:06.823558    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:40:06.823583    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:40:06.832052    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52040
	I0717 10:40:06.832422    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:40:06.832722    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:40:06.832733    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:40:06.832924    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:40:06.833068    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:40:06.833173    3636 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000 for IP: 192.169.0.8
	I0717 10:40:06.833178    3636 certs.go:194] generating shared ca certs ...
	I0717 10:40:06.833187    3636 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:40:06.833369    3636 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:40:06.833445    3636 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:40:06.833455    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:40:06.833477    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:40:06.833496    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:40:06.833513    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:40:06.833602    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:40:06.833654    3636 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:40:06.833664    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:40:06.833699    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:40:06.833731    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:40:06.833765    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:40:06.833830    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:40:06.833866    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:40:06.833895    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:40:06.833914    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:40:06.833943    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:40:06.854528    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:40:06.874473    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:40:06.894419    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:40:06.914655    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:40:06.934481    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:40:06.953938    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:40:06.973423    3636 ssh_runner.go:195] Run: openssl version
	I0717 10:40:06.977846    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:40:06.987226    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:40:06.990594    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:40:06.990633    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:40:06.994910    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:40:07.004316    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:40:07.013700    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:40:07.017207    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:40:07.017252    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:40:07.021661    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
	I0717 10:40:07.030891    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:40:07.040013    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:40:07.043424    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:40:07.043460    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:40:07.048023    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 10:40:07.057292    3636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:40:07.060465    3636 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 10:40:07.060498    3636 kubeadm.go:934] updating node {m04 192.169.0.8 0 v1.30.2 docker false true} ...
	I0717 10:40:07.060568    3636 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-572000-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 10:40:07.060612    3636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:40:07.068828    3636 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:40:07.068888    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0717 10:40:07.077989    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0717 10:40:07.091753    3636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:40:07.105613    3636 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0717 10:40:07.108527    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:40:07.118827    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:07.218618    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:40:07.232580    3636 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0717 10:40:07.232780    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:40:07.270354    3636 out.go:177] * Verifying Kubernetes components...
	I0717 10:40:07.343786    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:07.486955    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:40:07.502599    3636 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:40:07.502930    3636 kapi.go:59] client config for ha-572000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa0a8b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 10:40:07.502990    3636 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0717 10:40:07.503236    3636 node_ready.go:35] waiting up to 6m0s for node "ha-572000-m04" to be "Ready" ...
	I0717 10:40:07.503290    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:40:07.503296    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.503303    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.503305    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.507147    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:07.507598    3636 node_ready.go:49] node "ha-572000-m04" has status "Ready":"True"
	I0717 10:40:07.507619    3636 node_ready.go:38] duration metric: took 4.370479ms for node "ha-572000-m04" to be "Ready" ...
	I0717 10:40:07.507631    3636 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:40:07.507695    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:40:07.507705    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.507714    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.507718    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.517761    3636 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0717 10:40:07.525740    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.525796    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:40:07.525804    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.525810    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.525815    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.527956    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:07.528370    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:07.528378    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.528384    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.528387    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.530521    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:07.530888    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:07.530899    3636 pod_ready.go:81] duration metric: took 5.142557ms for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.530907    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.530969    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9dzd5
	I0717 10:40:07.530978    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.530985    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.530990    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.533172    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:07.533578    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:07.533586    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.533592    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.533595    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.535152    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.535453    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:07.535462    3636 pod_ready.go:81] duration metric: took 4.549454ms for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.535469    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.535504    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000
	I0717 10:40:07.535509    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.535515    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.535519    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.537042    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.537410    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:07.537417    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.537423    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.537426    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.538975    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.539323    3636 pod_ready.go:92] pod "etcd-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:07.539331    3636 pod_ready.go:81] duration metric: took 3.856623ms for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.539337    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.539378    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m02
	I0717 10:40:07.539383    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.539389    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.539393    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.541081    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.541459    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:07.541467    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.541473    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.541476    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.542992    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.543383    3636 pod_ready.go:92] pod "etcd-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:07.543391    3636 pod_ready.go:81] duration metric: took 4.050033ms for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.543397    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.703505    3636 request.go:629] Waited for 160.066521ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m03
	I0717 10:40:07.703540    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m03
	I0717 10:40:07.703545    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.703551    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.703556    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.705548    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.903510    3636 request.go:629] Waited for 197.511686ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:07.903551    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:07.903556    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.903562    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.903601    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.905857    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:07.906157    3636 pod_ready.go:92] pod "etcd-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:07.906168    3636 pod_ready.go:81] duration metric: took 362.756768ms for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.906180    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:08.103966    3636 request.go:629] Waited for 197.743139ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:40:08.104021    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:40:08.104030    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:08.104037    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:08.104046    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:08.106066    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:08.303534    3636 request.go:629] Waited for 196.774341ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:08.303599    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:08.303671    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:08.303686    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:08.303697    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:08.306313    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:08.306837    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:08.306847    3636 pod_ready.go:81] duration metric: took 400.65093ms for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:08.306854    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:08.503920    3636 request.go:629] Waited for 197.018157ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:40:08.503964    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:40:08.503984    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:08.503990    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:08.503995    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:08.506056    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:08.703436    3636 request.go:629] Waited for 196.948288ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:08.703494    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:08.703500    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:08.703506    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:08.703511    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:08.705852    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:08.706163    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:08.706173    3636 pod_ready.go:81] duration metric: took 399.30321ms for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:08.706179    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:08.903771    3636 request.go:629] Waited for 197.50006ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:40:08.903806    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:40:08.903813    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:08.903820    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:08.903824    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:08.906399    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:09.104084    3636 request.go:629] Waited for 197.163497ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:09.104171    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:09.104176    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:09.104182    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:09.104187    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:09.106361    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:09.106707    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:09.106718    3636 pod_ready.go:81] duration metric: took 400.52413ms for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:09.106726    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:09.304052    3636 request.go:629] Waited for 197.283261ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:40:09.304088    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:40:09.304093    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:09.304130    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:09.304135    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:09.306083    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:09.504106    3636 request.go:629] Waited for 197.645757ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:09.504208    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:09.504220    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:09.504232    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:09.504240    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:09.511286    3636 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 10:40:09.511696    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:09.511709    3636 pod_ready.go:81] duration metric: took 404.967221ms for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:09.511716    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:09.703585    3636 request.go:629] Waited for 191.795231ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:40:09.703642    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:40:09.703653    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:09.703665    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:09.703671    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:09.706720    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:09.904070    3636 request.go:629] Waited for 196.771647ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:09.904118    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:09.904125    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:09.904134    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:09.904140    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:09.906439    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:09.906766    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:09.906776    3636 pod_ready.go:81] duration metric: took 395.046014ms for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:09.906787    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:10.104935    3636 request.go:629] Waited for 198.017235ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:40:10.105019    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:40:10.105031    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:10.105061    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:10.105068    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:10.108223    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:10.304013    3636 request.go:629] Waited for 195.251924ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:10.304073    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:10.304086    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:10.304097    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:10.304106    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:10.307327    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:10.307882    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:10.307891    3636 pod_ready.go:81] duration metric: took 401.08706ms for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:10.307899    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:10.504739    3636 request.go:629] Waited for 196.801571ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:40:10.504780    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:40:10.504821    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:10.504827    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:10.504831    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:10.506960    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:10.703733    3636 request.go:629] Waited for 196.095597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:40:10.703831    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:40:10.703840    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:10.703866    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:10.703875    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:10.706696    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:10.707101    3636 pod_ready.go:92] pod "kube-proxy-5wcph" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:10.707111    3636 pod_ready.go:81] duration metric: took 399.196595ms for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:10.707118    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:10.903773    3636 request.go:629] Waited for 196.61026ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:40:10.903910    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:40:10.903927    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:10.903945    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:10.903955    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:10.906117    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:11.104247    3636 request.go:629] Waited for 197.64653ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:11.104330    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:11.104339    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:11.104351    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:11.104362    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:11.107473    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:11.107930    3636 pod_ready.go:92] pod "kube-proxy-h7k9z" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:11.107945    3636 pod_ready.go:81] duration metric: took 400.810357ms for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:11.107954    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:11.304083    3636 request.go:629] Waited for 196.074281ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:40:11.304132    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:40:11.304139    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:11.304147    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:11.304151    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:11.306391    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:11.503460    3636 request.go:629] Waited for 196.558235ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:11.503507    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:11.503513    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:11.503519    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:11.503523    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:11.505457    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:11.505774    3636 pod_ready.go:92] pod "kube-proxy-hst7h" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:11.505785    3636 pod_ready.go:81] duration metric: took 397.815014ms for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:11.505792    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:11.704821    3636 request.go:629] Waited for 198.981688ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:40:11.704918    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:40:11.704927    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:11.704933    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:11.704936    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:11.707262    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:11.903612    3636 request.go:629] Waited for 195.874248ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:11.903682    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:11.903689    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:11.903696    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:11.903700    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:11.905982    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:11.906348    3636 pod_ready.go:92] pod "kube-proxy-v6jxh" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:11.906359    3636 pod_ready.go:81] duration metric: took 400.551047ms for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:11.906369    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:12.103492    3636 request.go:629] Waited for 197.075685ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:40:12.103568    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:40:12.103574    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:12.103580    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:12.103585    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:12.105506    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:12.303814    3636 request.go:629] Waited for 197.930746ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:12.303844    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:12.303850    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:12.303867    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:12.303874    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:12.305845    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:12.306164    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:12.306174    3636 pod_ready.go:81] duration metric: took 399.787712ms for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:12.306181    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:12.503949    3636 request.go:629] Waited for 197.718801ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:40:12.504068    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:40:12.504079    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:12.504087    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:12.504093    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:12.506372    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:12.704852    3636 request.go:629] Waited for 198.155745ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:12.704924    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:12.704932    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:12.704940    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:12.704944    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:12.707307    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:12.707616    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:12.707626    3636 pod_ready.go:81] duration metric: took 401.429815ms for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:12.707633    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:12.903728    3636 request.go:629] Waited for 196.035029ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:40:12.903828    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:40:12.903836    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:12.903842    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:12.903845    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:12.906224    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:13.103515    3636 request.go:629] Waited for 196.951957ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:13.103588    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:13.103593    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:13.103599    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:13.103603    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:13.105622    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:13.106020    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:13.106029    3636 pod_ready.go:81] duration metric: took 398.380033ms for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:13.106046    3636 pod_ready.go:38] duration metric: took 5.59825813s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:40:13.106061    3636 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 10:40:13.106113    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 10:40:13.116872    3636 system_svc.go:56] duration metric: took 10.807598ms WaitForService to wait for kubelet
	I0717 10:40:13.116887    3636 kubeadm.go:582] duration metric: took 5.884130758s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:40:13.116904    3636 node_conditions.go:102] verifying NodePressure condition ...
	I0717 10:40:13.303772    3636 request.go:629] Waited for 186.81691ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0717 10:40:13.303803    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0717 10:40:13.303807    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:13.303841    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:13.303846    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:13.306895    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:13.307714    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:40:13.307729    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:40:13.307740    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:40:13.307744    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:40:13.307748    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:40:13.307751    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:40:13.307757    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:40:13.307761    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:40:13.307764    3636 node_conditions.go:105] duration metric: took 190.851869ms to run NodePressure ...
	I0717 10:40:13.307772    3636 start.go:241] waiting for startup goroutines ...
	I0717 10:40:13.307786    3636 start.go:255] writing updated cluster config ...
	I0717 10:40:13.308139    3636 ssh_runner.go:195] Run: rm -f paused
	I0717 10:40:13.349733    3636 start.go:600] kubectl: 1.29.2, cluster: 1.30.2 (minor skew: 1)
	I0717 10:40:13.371543    3636 out.go:177] * Done! kubectl is now configured to use "ha-572000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 17 17:38:43 ha-572000 dockerd[1183]: time="2024-07-17T17:38:43.319450280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.340195606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.340255461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.340333620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.340397061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.341315078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.341404694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.341501856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.343515271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.343612113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.343637500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.343972230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.346166794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:45 ha-572000 dockerd[1183]: time="2024-07-17T17:38:45.310104278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:38:45 ha-572000 dockerd[1183]: time="2024-07-17T17:38:45.310177463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:38:45 ha-572000 dockerd[1183]: time="2024-07-17T17:38:45.310195349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:45 ha-572000 dockerd[1183]: time="2024-07-17T17:38:45.310377303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:39:13 ha-572000 dockerd[1176]: time="2024-07-17T17:39:13.526781737Z" level=info msg="ignoring event" container=a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 17:39:13 ha-572000 dockerd[1183]: time="2024-07-17T17:39:13.527422614Z" level=info msg="shim disconnected" id=a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403 namespace=moby
	Jul 17 17:39:13 ha-572000 dockerd[1183]: time="2024-07-17T17:39:13.527577585Z" level=warning msg="cleaning up after shim disconnected" id=a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403 namespace=moby
	Jul 17 17:39:13 ha-572000 dockerd[1183]: time="2024-07-17T17:39:13.527671021Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 17 17:40:35 ha-572000 dockerd[1183]: time="2024-07-17T17:40:35.340652733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:40:35 ha-572000 dockerd[1183]: time="2024-07-17T17:40:35.340734956Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:40:35 ha-572000 dockerd[1183]: time="2024-07-17T17:40:35.340749170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:40:35 ha-572000 dockerd[1183]: time="2024-07-17T17:40:35.341115504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f904e7fbc3286       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   be6e24303245d       storage-provisioner
	0544a7b38aa20       cbb01a7bd410d                                                                                         2 minutes ago        Running             coredns                   1                   211b5a6515354       coredns-7db6d8ff4d-9dzd5
	2f15e40a181ae       53c535741fb44                                                                                         2 minutes ago        Running             kube-proxy                1                   4aab8735c2c04       kube-proxy-hst7h
	a5d6b6937bc80       8c811b4aec35f                                                                                         2 minutes ago        Running             busybox                   1                   24dc28c9171d4       busybox-fc5497c4f-5r4wl
	90d12ecf2a207       5cc3abe5717db                                                                                         2 minutes ago        Running             kindnet-cni               1                   c4ad8ae388e4c       kindnet-t85bv
	a82cf6255e5a9       6e38f40d628db                                                                                         3 minutes ago        Exited              storage-provisioner       1                   be6e24303245d       storage-provisioner
	22dbe2e88f6f6       cbb01a7bd410d                                                                                         3 minutes ago        Running             coredns                   1                   ebfbe4a086eb8       coredns-7db6d8ff4d-2phrp
	d0c5e4f0005b0       e874818b3caac                                                                                         3 minutes ago        Running             kube-controller-manager   6                   3143df977771c       kube-controller-manager-ha-572000
	2988c5a570cb1       38af8ddebf499                                                                                         3 minutes ago        Running             kube-vip                  1                   bb35c323d1311       kube-vip-ha-572000
	b589feb3cd968       7820c83aa1394                                                                                         3 minutes ago        Running             kube-scheduler            2                   1f36c956df9c2       kube-scheduler-ha-572000
	c4604d37a9454       3861cfcd7c04c                                                                                         3 minutes ago        Running             etcd                      3                   73d23719d576c       etcd-ha-572000
	490b99a8cd7e0       56ce0fd9fb532                                                                                         3 minutes ago        Running             kube-apiserver            6                   43743c72743dc       kube-apiserver-ha-572000
	caed8fc7c24d9       e874818b3caac                                                                                         3 minutes ago        Exited              kube-controller-manager   5                   3143df977771c       kube-controller-manager-ha-572000
	cd333393aa057       56ce0fd9fb532                                                                                         4 minutes ago        Exited              kube-apiserver            5                   6d7eb0e874999       kube-apiserver-ha-572000
	b6b4ce34842d6       3861cfcd7c04c                                                                                         4 minutes ago        Exited              etcd                      2                   986ceb5a6f870       etcd-ha-572000
	138bf6784d59c       38af8ddebf499                                                                                         8 minutes ago        Exited              kube-vip                  0                   df04438a4c5cc       kube-vip-ha-572000
	a53f8fcdf5d97       7820c83aa1394                                                                                         8 minutes ago        Exited              kube-scheduler            1                   bfd880612991e       kube-scheduler-ha-572000
	e1a5eb1bed550       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago       Exited              busybox                   0                   29ab413131af2       busybox-fc5497c4f-5r4wl
	bb44d784bb7ab       cbb01a7bd410d                                                                                         13 minutes ago       Exited              coredns                   0                   b8c622f08395f       coredns-7db6d8ff4d-2phrp
	7b275812468c9       cbb01a7bd410d                                                                                         13 minutes ago       Exited              coredns                   0                   2588bd7c40c23       coredns-7db6d8ff4d-9dzd5
	6e40e1427ab20       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              13 minutes ago       Exited              kindnet-cni               0                   32dc836c0a2df       kindnet-t85bv
	2aeed19835352       53c535741fb44                                                                                         14 minutes ago       Exited              kube-proxy                0                   f688e08d591be       kube-proxy-hst7h
	
	
	==> coredns [0544a7b38aa2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47730 - 44649 "HINFO IN 7657991150461714427.6847867729784937660. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009507113s
	
	
	==> coredns [22dbe2e88f6f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50584 - 51756 "HINFO IN 3888167032918365436.646455749640363721. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.007934252s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1469986290]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 17:38:43.514) (total time: 30002ms):
	Trace[1469986290]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (17:39:13.516)
	Trace[1469986290]: [30.002760682s] [30.002760682s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1457962466]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 17:38:43.515) (total time: 30001ms):
	Trace[1457962466]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:39:13.516)
	Trace[1457962466]: [30.001713432s] [30.001713432s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[94258701]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 17:38:43.514) (total time: 30003ms):
	Trace[94258701]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (17:39:13.516)
	Trace[94258701]: [30.003582814s] [30.003582814s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [7b275812468c] <==
	[INFO] 10.244.0.4:49035 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001628883s
	[INFO] 10.244.0.4:59665 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081818s
	[INFO] 10.244.0.4:52274 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077378s
	[INFO] 10.244.1.2:47113 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103907s
	[INFO] 10.244.2.2:57796 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060561s
	[INFO] 10.244.2.2:40339 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000425199s
	[INFO] 10.244.2.2:38339 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010468s
	[INFO] 10.244.2.2:50750 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070547s
	[INFO] 10.244.0.4:57426 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000045039s
	[INFO] 10.244.0.4:44283 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031195s
	[INFO] 10.244.1.2:56783 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117359s
	[INFO] 10.244.1.2:34315 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061358s
	[INFO] 10.244.1.2:55792 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062223s
	[INFO] 10.244.2.2:35768 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086426s
	[INFO] 10.244.2.2:42473 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102022s
	[INFO] 10.244.0.4:53524 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000087177s
	[INFO] 10.244.0.4:35224 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079092s
	[INFO] 10.244.0.4:53020 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075585s
	[INFO] 10.244.1.2:58074 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138293s
	[INFO] 10.244.1.2:40567 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000088337s
	[INFO] 10.244.1.2:46301 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000065328s
	[INFO] 10.244.1.2:59898 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075157s
	[INFO] 10.244.2.2:48754 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000081888s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bb44d784bb7a] <==
	[INFO] 10.244.0.4:54052 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001019206s
	[INFO] 10.244.0.4:45412 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077905s
	[INFO] 10.244.0.4:37924 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159382s
	[INFO] 10.244.1.2:45862 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108196s
	[INFO] 10.244.1.2:47464 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000616154s
	[INFO] 10.244.1.2:53720 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059338s
	[INFO] 10.244.1.2:49774 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123053s
	[INFO] 10.244.1.2:54472 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000097287s
	[INFO] 10.244.1.2:52054 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064095s
	[INFO] 10.244.1.2:54428 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064142s
	[INFO] 10.244.2.2:53088 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109393s
	[INFO] 10.244.2.2:49993 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000114279s
	[INFO] 10.244.2.2:42708 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063661s
	[INFO] 10.244.2.2:57150 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000043434s
	[INFO] 10.244.0.4:45987 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112397s
	[INFO] 10.244.0.4:44116 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057797s
	[INFO] 10.244.1.2:37635 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105428s
	[INFO] 10.244.2.2:52081 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131248s
	[INFO] 10.244.2.2:54912 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087858s
	[INFO] 10.244.0.4:53981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079367s
	[INFO] 10.244.2.2:45850 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152036s
	[INFO] 10.244.2.2:36004 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000223s
	[INFO] 10.244.2.2:54459 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000128834s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-572000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-572000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-572000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T10_27_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:27:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-572000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:41:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:38:16 +0000   Wed, 17 Jul 2024 17:27:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:38:16 +0000   Wed, 17 Jul 2024 17:27:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:38:16 +0000   Wed, 17 Jul 2024 17:27:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:38:16 +0000   Wed, 17 Jul 2024 17:27:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-572000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 cc4828ff3a4b410d87d0a2c48b8c546d
	  System UUID:                5f264258-0000-0000-9840-7856c1bd4173
	  Boot ID:                    2568bff2-eded-45b6-850c-4c0e9d36f966
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5r4wl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-2phrp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-9dzd5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-572000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-t85bv                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-572000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-572000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-hst7h                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-572000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-572000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m59s                kube-proxy       
	  Normal  Starting                 14m                  kube-proxy       
	  Normal  NodeHasSufficientPID     14m                  kubelet          Node ha-572000 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                  kubelet          Node ha-572000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                  kubelet          Node ha-572000 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           14m                  node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  NodeReady                13m                  kubelet          Node ha-572000 status is now: NodeReady
	  Normal  RegisteredNode           12m                  node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  RegisteredNode           9m37s                node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  Starting                 4m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m5s (x8 over 4m5s)  kubelet          Node ha-572000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m5s (x8 over 4m5s)  kubelet          Node ha-572000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m5s (x7 over 4m5s)  kubelet          Node ha-572000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m24s                node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  RegisteredNode           13s                  node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	
	
	Name:               ha-572000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-572000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-572000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T10_28_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:28:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-572000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:41:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:38:08 +0000   Wed, 17 Jul 2024 17:28:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:38:08 +0000   Wed, 17 Jul 2024 17:28:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:38:08 +0000   Wed, 17 Jul 2024 17:28:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:38:08 +0000   Wed, 17 Jul 2024 17:28:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-572000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 21a94638d6914aaeb48a6d7a895c9b99
	  System UUID:                b5da4916-0000-0000-aec8-9a96c30c8c05
	  Boot ID:                    d3f575b3-f9f0-45ee-bee7-6209fb3d26a4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9sdw5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-572000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-g2m92                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-572000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-572000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-v6jxh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-572000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-572000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m29s                  kube-proxy       
	  Normal   Starting                 9m50s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-572000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-572000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-572000-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                    node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Warning  Rebooted                 9m53s                  kubelet          Node ha-572000-m02 has been rebooted, boot id: 7661c0d0-1379-4b0e-b101-3961fae1a207
	  Normal   NodeHasSufficientPID     9m53s                  kubelet          Node ha-572000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    9m53s                  kubelet          Node ha-572000-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 9m53s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9m53s                  kubelet          Node ha-572000-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           9m37s                  node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   Starting                 3m47s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  3m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m46s (x8 over 3m47s)  kubelet          Node ha-572000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m46s (x8 over 3m47s)  kubelet          Node ha-572000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m46s (x7 over 3m47s)  kubelet          Node ha-572000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m24s                  node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   RegisteredNode           3m3s                   node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   RegisteredNode           2m57s                  node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   RegisteredNode           13s                    node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	
	
	Name:               ha-572000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-572000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-572000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T10_29_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:29:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-572000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:41:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:38:31 +0000   Wed, 17 Jul 2024 17:29:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:38:31 +0000   Wed, 17 Jul 2024 17:29:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:38:31 +0000   Wed, 17 Jul 2024 17:29:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:38:31 +0000   Wed, 17 Jul 2024 17:30:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-572000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 be52acddd53148cc8c17d6c21c17abf3
	  System UUID:                50644be4-0000-0000-8d75-15b09204e5f5
	  Boot ID:                    f2b4f1ba-89fc-4cd2-a163-0306f2bae7bb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jhz2d                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-572000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-72zfp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-572000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-572000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-h7k9z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-572000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-572000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 3m9s               kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-572000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-572000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-572000-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   RegisteredNode           9m37s              node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   RegisteredNode           3m24s              node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   NodeHasSufficientMemory  3m13s              kubelet          Node ha-572000-m03 status is now: NodeHasSufficientMemory
	  Normal   Starting                 3m13s              kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  3m13s              kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    3m13s              kubelet          Node ha-572000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m13s              kubelet          Node ha-572000-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 3m13s              kubelet          Node ha-572000-m03 has been rebooted, boot id: f2b4f1ba-89fc-4cd2-a163-0306f2bae7bb
	  Normal   RegisteredNode           3m3s               node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   RegisteredNode           2m57s              node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   RegisteredNode           13s                node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	
	
	Name:               ha-572000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-572000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-572000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T10_30_40_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:30:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-572000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:41:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:40:07 +0000   Wed, 17 Jul 2024 17:40:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:40:07 +0000   Wed, 17 Jul 2024 17:40:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:40:07 +0000   Wed, 17 Jul 2024 17:40:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:40:07 +0000   Wed, 17 Jul 2024 17:40:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-572000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 a064c491460940e4967dc27f529a5ea6
	  System UUID:                d62b4091-0000-0000-a1f9-ae55052b3d93
	  Boot ID:                    9c875bb7-4ccf-49df-b662-ce64a8634436
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5xsrp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-proxy-5wcph    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 95s                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet          Node ha-572000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet          Node ha-572000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet          Node ha-572000-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-572000-m04 status is now: NodeReady
	  Normal   RegisteredNode           9m37s              node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   RegisteredNode           3m24s              node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   RegisteredNode           3m3s               node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   RegisteredNode           2m57s              node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   NodeNotReady             2m44s              node-controller  Node ha-572000-m04 status is now: NodeNotReady
	  Normal   Starting                 97s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  97s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  97s (x2 over 97s)  kubelet          Node ha-572000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    97s (x2 over 97s)  kubelet          Node ha-572000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     97s (x2 over 97s)  kubelet          Node ha-572000-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 97s                kubelet          Node ha-572000-m04 has been rebooted, boot id: 9c875bb7-4ccf-49df-b662-ce64a8634436
	  Normal   NodeReady                97s                kubelet          Node ha-572000-m04 status is now: NodeReady
	  Normal   RegisteredNode           13s                node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	
	
	Name:               ha-572000-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-572000-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-572000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T10_41_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:41:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-572000-m05
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:41:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:41:33 +0000   Wed, 17 Jul 2024 17:41:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:41:33 +0000   Wed, 17 Jul 2024 17:41:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:41:33 +0000   Wed, 17 Jul 2024 17:41:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:41:33 +0000   Wed, 17 Jul 2024 17:41:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.9
	  Hostname:    ha-572000-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 767b66bba81748e696eb2e462f5f7060
	  System UUID:                56c3461c-0000-0000-b26f-1b2c0afb03b4
	  Boot ID:                    39c772b6-19a0-4d6d-b5e1-f52a71880d81
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-572000-m05                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28s
	  kube-system                 kindnet-dpf85                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      30s
	  kube-system                 kube-apiserver-ha-572000-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-ha-572000-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-proxy-64xjf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-ha-572000-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-vip-ha-572000-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  NodeHasSufficientMemory  30s (x8 over 31s)  kubelet          Node ha-572000-m05 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s (x8 over 31s)  kubelet          Node ha-572000-m05 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s (x7 over 31s)  kubelet          Node ha-572000-m05 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  30s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           29s                node-controller  Node ha-572000-m05 event: Registered Node ha-572000-m05 in Controller
	  Normal  RegisteredNode           28s                node-controller  Node ha-572000-m05 event: Registered Node ha-572000-m05 in Controller
	  Normal  RegisteredNode           27s                node-controller  Node ha-572000-m05 event: Registered Node ha-572000-m05 in Controller
	  Normal  RegisteredNode           13s                node-controller  Node ha-572000-m05 event: Registered Node ha-572000-m05 in Controller
	
	
	==> dmesg <==
	[  +0.035701] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007982] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.369068] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006691] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.635959] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.223787] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.844039] systemd-fstab-generator[468]: Ignoring "noauto" option for root device
	[  +0.100018] systemd-fstab-generator[480]: Ignoring "noauto" option for root device
	[  +1.895052] systemd-fstab-generator[1104]: Ignoring "noauto" option for root device
	[  +0.053692] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.194931] systemd-fstab-generator[1141]: Ignoring "noauto" option for root device
	[  +0.116874] systemd-fstab-generator[1153]: Ignoring "noauto" option for root device
	[  +0.104796] systemd-fstab-generator[1167]: Ignoring "noauto" option for root device
	[  +2.435008] systemd-fstab-generator[1384]: Ignoring "noauto" option for root device
	[  +0.114297] systemd-fstab-generator[1396]: Ignoring "noauto" option for root device
	[  +0.106280] systemd-fstab-generator[1408]: Ignoring "noauto" option for root device
	[  +0.119247] systemd-fstab-generator[1423]: Ignoring "noauto" option for root device
	[  +0.407183] systemd-fstab-generator[1582]: Ignoring "noauto" option for root device
	[  +6.782353] kauditd_printk_skb: 234 callbacks suppressed
	[Jul17 17:38] kauditd_printk_skb: 40 callbacks suppressed
	[ +35.726193] kauditd_printk_skb: 25 callbacks suppressed
	[Jul17 17:39] kauditd_printk_skb: 50 callbacks suppressed
	
	
	==> etcd [b6b4ce34842d] <==
	{"level":"info","ts":"2024-07-17T17:37:06.183089Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-07-17T17:37:07.625159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:07.625381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:07.625508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:07.625789Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:07.626021Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:09.125694Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:09.125798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:09.125845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:09.12599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:09.12619Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:10.625744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:10.62582Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:10.625858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:10.625915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:10.625982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	{"level":"warn","ts":"2024-07-17T17:37:11.167194Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f6a6d94fe6ea5bc8","rtt":"0s"}
	{"level":"warn","ts":"2024-07-17T17:37:11.167486Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f6a6d94fe6ea5bc8","rtt":"0s"}
	{"level":"warn","ts":"2024-07-17T17:37:11.185338Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1d3f36ee75516151","rtt":"0s"}
	{"level":"warn","ts":"2024-07-17T17:37:11.185403Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1d3f36ee75516151","rtt":"0s"}
	{"level":"info","ts":"2024-07-17T17:37:12.128113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:12.128317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:12.128469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:12.128888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:12.129376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	
	
	==> etcd [c4604d37a945] <==
	{"level":"warn","ts":"2024-07-17T17:41:14.324016Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.169.0.9:44238","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-07-17T17:41:14.334681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(2107463548431065425 13314548521573537860 17773131916664003528) learners=(16006101081352431403)"}
	{"level":"info","ts":"2024-07-17T17:41:14.335365Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","added-peer-id":"de2118212901e72b","added-peer-peer-urls":["https://192.169.0.9:2380"]}
	{"level":"info","ts":"2024-07-17T17:41:14.335913Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"de2118212901e72b"}
	{"level":"info","ts":"2024-07-17T17:41:14.336217Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"de2118212901e72b"}
	{"level":"info","ts":"2024-07-17T17:41:14.337301Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"de2118212901e72b"}
	{"level":"info","ts":"2024-07-17T17:41:14.338676Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"de2118212901e72b"}
	{"level":"info","ts":"2024-07-17T17:41:14.34151Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"de2118212901e72b"}
	{"level":"info","ts":"2024-07-17T17:41:14.342462Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"de2118212901e72b","remote-peer-urls":["https://192.169.0.9:2380"]}
	{"level":"info","ts":"2024-07-17T17:41:14.341846Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"de2118212901e72b"}
	{"level":"info","ts":"2024-07-17T17:41:14.34175Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"de2118212901e72b"}
	{"level":"warn","ts":"2024-07-17T17:41:14.37406Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.169.0.9:44278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-07-17T17:41:14.393112Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"de2118212901e72b","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-07-17T17:41:14.886078Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"de2118212901e72b","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-07-17T17:41:15.391819Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"de2118212901e72b","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-07-17T17:41:15.441542Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"de2118212901e72b"}
	{"level":"info","ts":"2024-07-17T17:41:15.453858Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"de2118212901e72b"}
	{"level":"info","ts":"2024-07-17T17:41:15.456472Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"de2118212901e72b"}
	{"level":"info","ts":"2024-07-17T17:41:15.488031Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"de2118212901e72b","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-17T17:41:15.488346Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"de2118212901e72b"}
	{"level":"info","ts":"2024-07-17T17:41:15.488889Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"de2118212901e72b","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-17T17:41:15.488928Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"de2118212901e72b"}
	{"level":"info","ts":"2024-07-17T17:41:16.392768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(2107463548431065425 13314548521573537860 16006101081352431403 17773131916664003528)"}
	{"level":"info","ts":"2024-07-17T17:41:16.393348Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-07-17T17:41:16.393877Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"de2118212901e72b"}
	
	
	==> kernel <==
	 17:41:45 up 4 min,  0 users,  load average: 0.09, 0.08, 0.03
	Linux ha-572000 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6e40e1427ab2] <==
	I0717 17:31:56.892269       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:32:06.898213       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:32:06.898253       1 main.go:303] handling current node
	I0717 17:32:06.898264       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:32:06.898269       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:32:06.898416       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:32:06.898443       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:32:06.898526       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:32:06.898555       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:32:16.896377       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:32:16.896415       1 main.go:303] handling current node
	I0717 17:32:16.896426       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:32:16.896432       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:32:16.896606       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:32:16.896636       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:32:16.896674       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:32:16.896699       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:32:26.896557       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:32:26.896622       1 main.go:303] handling current node
	I0717 17:32:26.896678       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:32:26.896718       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:32:26.896938       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:32:26.897017       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:32:26.897158       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:32:26.897880       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [90d12ecf2a20] <==
	I0717 17:41:15.427937       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:41:15.427967       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:41:15.428296       1 main.go:299] Handling node with IPs: map[192.169.0.9:{}]
	I0717 17:41:15.428449       1 main.go:326] Node ha-572000-m05 has CIDR [10.244.4.0/24] 
	I0717 17:41:15.428833       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.4.0/24 Src: <nil> Gw: 192.169.0.9 Flags: [] Table: 0} 
	I0717 17:41:25.426752       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:41:25.427215       1 main.go:303] handling current node
	I0717 17:41:25.427433       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:41:25.427706       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:41:25.428059       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:41:25.428248       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:41:25.428422       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:41:25.428533       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:41:25.428768       1 main.go:299] Handling node with IPs: map[192.169.0.9:{}]
	I0717 17:41:25.428834       1 main.go:326] Node ha-572000-m05 has CIDR [10.244.4.0/24] 
	I0717 17:41:35.427111       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:41:35.427150       1 main.go:303] handling current node
	I0717 17:41:35.427162       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:41:35.427167       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:41:35.427382       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:41:35.427413       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:41:35.427460       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:41:35.427464       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:41:35.427495       1 main.go:299] Handling node with IPs: map[192.169.0.9:{}]
	I0717 17:41:35.427499       1 main.go:326] Node ha-572000-m05 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [490b99a8cd7e] <==
	I0717 17:38:06.692598       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0717 17:38:06.695172       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 17:38:06.753691       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 17:38:06.754495       1 policy_source.go:224] refreshing policies
	I0717 17:38:06.761461       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 17:38:06.775946       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 17:38:06.777937       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 17:38:06.777967       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 17:38:06.785861       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 17:38:06.785861       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 17:38:06.789965       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0717 17:38:06.785881       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 17:38:06.790098       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 17:38:06.790136       1 aggregator.go:165] initial CRD sync complete...
	I0717 17:38:06.790141       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 17:38:06.790145       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 17:38:06.790148       1 cache.go:39] Caches are synced for autoregister controller
	W0717 17:38:06.822673       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.7]
	I0717 17:38:06.824170       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 17:38:06.847080       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 17:38:06.894480       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0717 17:38:06.899931       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0717 17:38:07.685599       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0717 17:38:07.910228       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5 192.169.0.7]
	W0717 17:38:27.915985       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5 192.169.0.6]
	
	
	==> kube-apiserver [cd333393aa05] <==
	I0717 17:37:11.795742       1 options.go:221] external host was not specified, using 192.169.0.5
	I0717 17:37:11.796641       1 server.go:148] Version: v1.30.2
	I0717 17:37:11.796774       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:37:12.098000       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0717 17:37:12.100463       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 17:37:12.102906       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0717 17:37:12.102927       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0717 17:37:12.103040       1 instance.go:299] Using reconciler: lease
	W0717 17:37:13.058091       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:59336->127.0.0.1:2379: read: connection reset by peer"
	W0717 17:37:13.058287       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:59310->127.0.0.1:2379: read: connection reset by peer"
	W0717 17:37:13.058569       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:59320->127.0.0.1:2379: read: connection reset by peer"
	
	
	==> kube-controller-manager [caed8fc7c24d] <==
	I0717 17:37:47.127601       1 serving.go:380] Generated self-signed cert in-memory
	I0717 17:37:47.646900       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0717 17:37:47.646935       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:37:47.649809       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0717 17:37:47.649838       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 17:37:47.650220       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 17:37:47.649847       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0717 17:38:07.655360       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/start-system-n
amespaces-controller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [d0c5e4f0005b] <==
	I0717 17:38:41.432004       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0717 17:38:41.511531       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 17:38:41.518940       1 shared_informer.go:320] Caches are synced for daemon sets
	I0717 17:38:41.541830       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 17:38:41.550619       1 shared_informer.go:320] Caches are synced for stateful set
	I0717 17:38:41.975157       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 17:38:41.982462       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 17:38:41.982520       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 17:38:43.635302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.818µs"
	I0717 17:38:44.733712       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.810534ms"
	I0717 17:38:44.734043       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.445µs"
	I0717 17:38:45.721419       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.76µs"
	I0717 17:38:45.768611       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-9v69m EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-9v69m\": the object has been modified; please apply your changes to the latest version and try again"
	I0717 17:38:45.771754       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"7c540b68-a08e-44ac-9c69-ea596263c8eb", APIVersion:"v1", ResourceVersion:"260", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-9v69m EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-9v69m": the object has been modified; please apply your changes to the latest version and try again
	I0717 17:38:45.781131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.861246ms"
	I0717 17:38:45.781831       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47.528µs"
	I0717 17:39:19.551280       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.494894ms"
	I0717 17:39:19.551568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="81.124µs"
	I0717 17:40:07.684329       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-572000-m04"
	E0717 17:41:14.082163       1 certificate_controller.go:146] Sync csr-qbxdh failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-qbxdh": the object has been modified; please apply your changes to the latest version and try again
	I0717 17:41:14.172914       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-572000-m04"
	I0717 17:41:14.175471       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-572000-m05\" does not exist"
	I0717 17:41:14.194311       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-572000-m05" podCIDRs=["10.244.4.0/24"]
	I0717 17:41:16.399973       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-572000-m05"
	I0717 17:41:33.445418       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-572000-m04"
	
	
	==> kube-proxy [2aeed1983535] <==
	I0717 17:27:43.315695       1 server_linux.go:69] "Using iptables proxy"
	I0717 17:27:43.322673       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	I0717 17:27:43.354011       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 17:27:43.354032       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 17:27:43.354044       1 server_linux.go:165] "Using iptables Proxier"
	I0717 17:27:43.355997       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 17:27:43.356216       1 server.go:872] "Version info" version="v1.30.2"
	I0717 17:27:43.356225       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:27:43.356874       1 config.go:192] "Starting service config controller"
	I0717 17:27:43.356903       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 17:27:43.356921       1 config.go:319] "Starting node config controller"
	I0717 17:27:43.356943       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 17:27:43.357077       1 config.go:101] "Starting endpoint slice config controller"
	I0717 17:27:43.357144       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 17:27:43.457513       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 17:27:43.457607       1 shared_informer.go:320] Caches are synced for node config
	I0717 17:27:43.457639       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [2f15e40a181a] <==
	I0717 17:38:44.762819       1 server_linux.go:69] "Using iptables proxy"
	I0717 17:38:44.783856       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	I0717 17:38:44.830838       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 17:38:44.830870       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 17:38:44.830884       1 server_linux.go:165] "Using iptables Proxier"
	I0717 17:38:44.834309       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 17:38:44.834864       1 server.go:872] "Version info" version="v1.30.2"
	I0717 17:38:44.834894       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:38:44.836964       1 config.go:192] "Starting service config controller"
	I0717 17:38:44.837593       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 17:38:44.837672       1 config.go:101] "Starting endpoint slice config controller"
	I0717 17:38:44.837678       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 17:38:44.839841       1 config.go:319] "Starting node config controller"
	I0717 17:38:44.839870       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 17:38:44.938549       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 17:38:44.938751       1 shared_informer.go:320] Caches are synced for service config
	I0717 17:38:44.940510       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a53f8fcdf5d9] <==
	E0717 17:36:41.264926       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:36:42.998657       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:36:42.998862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:36:43.326673       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:36:43.327166       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:36:45.184656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:36:45.185412       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:36:52.182490       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:36:52.182723       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:37:00.423142       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:00.423274       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:37:01.259659       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:01.260400       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:37:02.377758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:02.378082       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:37:08.932628       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:08.932761       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:37:09.428412       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:09.428505       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:13.065507       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0717 17:37:13.067197       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0717 17:37:13.067371       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0717 17:37:13.067559       1 shared_informer.go:316] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 17:37:13.067604       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0717 17:37:13.067950       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b589feb3cd96] <==
	I0717 17:38:06.820740       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 17:41:14.239866       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vlsjj\": pod kindnet-vlsjj is already assigned to node \"ha-572000-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-vlsjj" node="ha-572000-m05"
	E0717 17:41:14.239947       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bf06a8b7-5c37-4959-8a51-d0be5c50ba7a(kube-system/kindnet-vlsjj) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-vlsjj"
	E0717 17:41:14.239965       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vlsjj\": pod kindnet-vlsjj is already assigned to node \"ha-572000-m05\"" pod="kube-system/kindnet-vlsjj"
	I0717 17:41:14.239980       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-vlsjj" node="ha-572000-m05"
	E0717 17:41:14.239465       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rdwjx\": pod kube-proxy-rdwjx is already assigned to node \"ha-572000-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rdwjx" node="ha-572000-m05"
	E0717 17:41:14.242527       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 949ed56b-85a2-4195-852b-78dc4bc5b578(kube-system/kube-proxy-rdwjx) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-rdwjx"
	E0717 17:41:14.249558       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rdwjx\": pod kube-proxy-rdwjx is already assigned to node \"ha-572000-m05\"" pod="kube-system/kube-proxy-rdwjx"
	I0717 17:41:14.249616       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rdwjx" node="ha-572000-m05"
	E0717 17:41:14.250217       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-p29wl\": pod kindnet-p29wl is already assigned to node \"ha-572000-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-p29wl" node="ha-572000-m05"
	E0717 17:41:14.252306       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-pzgmx\": pod kube-proxy-pzgmx is already assigned to node \"ha-572000-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-pzgmx" node="ha-572000-m05"
	E0717 17:41:14.252572       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bcc2c71d-a309-4576-9860-6418f0a2067d(kube-system/kube-proxy-pzgmx) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-pzgmx"
	E0717 17:41:14.252721       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-pzgmx\": pod kube-proxy-pzgmx is already assigned to node \"ha-572000-m05\"" pod="kube-system/kube-proxy-pzgmx"
	I0717 17:41:14.252854       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-pzgmx" node="ha-572000-m05"
	E0717 17:41:14.253203       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dpf85\": pod kindnet-dpf85 is already assigned to node \"ha-572000-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-dpf85" node="ha-572000-m05"
	E0717 17:41:14.253408       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod eca95df0-0ef0-44a6-b5de-bc7d469e569b(kube-system/kindnet-dpf85) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-dpf85"
	E0717 17:41:14.253544       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dpf85\": pod kindnet-dpf85 is already assigned to node \"ha-572000-m05\"" pod="kube-system/kindnet-dpf85"
	I0717 17:41:14.253603       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-dpf85" node="ha-572000-m05"
	E0717 17:41:14.250279       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 141e26ae-90e6-472e-8b26-fd21a5c88874(kube-system/kindnet-p29wl) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-p29wl"
	E0717 17:41:14.255824       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-p29wl\": pod kindnet-p29wl is already assigned to node \"ha-572000-m05\"" pod="kube-system/kindnet-p29wl"
	I0717 17:41:14.256117       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-p29wl" node="ha-572000-m05"
	E0717 17:41:14.270789       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-64xjf\": pod kube-proxy-64xjf is already assigned to node \"ha-572000-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-64xjf" node="ha-572000-m05"
	E0717 17:41:14.270845       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 40af5b55-9491-4221-9191-3d411d01d3a8(kube-system/kube-proxy-64xjf) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-64xjf"
	E0717 17:41:14.270858       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-64xjf\": pod kube-proxy-64xjf is already assigned to node \"ha-572000-m05\"" pod="kube-system/kube-proxy-64xjf"
	I0717 17:41:14.270885       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-64xjf" node="ha-572000-m05"
	
	
	==> kubelet <==
	Jul 17 17:39:28 ha-572000 kubelet[1589]: E0717 17:39:28.248343    1589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1801216d-6529-4049-b874-1577132fc03f)\"" pod="kube-system/storage-provisioner" podUID="1801216d-6529-4049-b874-1577132fc03f"
	Jul 17 17:39:39 ha-572000 kubelet[1589]: E0717 17:39:39.270524    1589 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:39:39 ha-572000 kubelet[1589]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:39:39 ha-572000 kubelet[1589]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:39:39 ha-572000 kubelet[1589]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:39:39 ha-572000 kubelet[1589]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:39:43 ha-572000 kubelet[1589]: I0717 17:39:43.248697    1589 scope.go:117] "RemoveContainer" containerID="a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403"
	Jul 17 17:39:43 ha-572000 kubelet[1589]: E0717 17:39:43.249374    1589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1801216d-6529-4049-b874-1577132fc03f)\"" pod="kube-system/storage-provisioner" podUID="1801216d-6529-4049-b874-1577132fc03f"
	Jul 17 17:39:54 ha-572000 kubelet[1589]: I0717 17:39:54.247534    1589 scope.go:117] "RemoveContainer" containerID="a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403"
	Jul 17 17:39:54 ha-572000 kubelet[1589]: E0717 17:39:54.248369    1589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1801216d-6529-4049-b874-1577132fc03f)\"" pod="kube-system/storage-provisioner" podUID="1801216d-6529-4049-b874-1577132fc03f"
	Jul 17 17:40:07 ha-572000 kubelet[1589]: I0717 17:40:07.247771    1589 scope.go:117] "RemoveContainer" containerID="a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403"
	Jul 17 17:40:07 ha-572000 kubelet[1589]: E0717 17:40:07.248147    1589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1801216d-6529-4049-b874-1577132fc03f)\"" pod="kube-system/storage-provisioner" podUID="1801216d-6529-4049-b874-1577132fc03f"
	Jul 17 17:40:22 ha-572000 kubelet[1589]: I0717 17:40:22.247319    1589 scope.go:117] "RemoveContainer" containerID="a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403"
	Jul 17 17:40:22 ha-572000 kubelet[1589]: E0717 17:40:22.247457    1589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1801216d-6529-4049-b874-1577132fc03f)\"" pod="kube-system/storage-provisioner" podUID="1801216d-6529-4049-b874-1577132fc03f"
	Jul 17 17:40:35 ha-572000 kubelet[1589]: I0717 17:40:35.248729    1589 scope.go:117] "RemoveContainer" containerID="a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403"
	Jul 17 17:40:39 ha-572000 kubelet[1589]: E0717 17:40:39.271602    1589 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:40:39 ha-572000 kubelet[1589]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:40:39 ha-572000 kubelet[1589]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:40:39 ha-572000 kubelet[1589]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:40:39 ha-572000 kubelet[1589]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:41:39 ha-572000 kubelet[1589]: E0717 17:41:39.273266    1589 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:41:39 ha-572000 kubelet[1589]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:41:39 ha-572000 kubelet[1589]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:41:39 ha-572000 kubelet[1589]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:41:39 ha-572000 kubelet[1589]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-572000 -n ha-572000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-572000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (81.96s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:304: expected profile "ha-572000" in json of 'profile list' to include 4 nodes but have 5 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-572000\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-572000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-572000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":
\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.169.0.9\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"i
ngress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608
000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-572000 -n ha-572000
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-572000 logs -n 25: (3.513158671s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m04 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m03_ha-572000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-572000 cp testdata/cp-test.txt                                                                                            | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3497274976/001/cp-test_ha-572000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000:/home/docker/cp-test_ha-572000-m04_ha-572000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000 sudo cat                                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m04_ha-572000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m02:/home/docker/cp-test_ha-572000-m04_ha-572000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m02 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m04_ha-572000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m03:/home/docker/cp-test_ha-572000-m04_ha-572000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | ha-572000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-572000 ssh -n ha-572000-m03 sudo cat                                                                                      | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | /home/docker/cp-test_ha-572000-m04_ha-572000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-572000 node stop m02 -v=7                                                                                                 | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-572000 node start m02 -v=7                                                                                                | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:32 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-572000 -v=7                                                                                                       | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-572000 -v=7                                                                                                            | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT | 17 Jul 24 10:32 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-572000 --wait=true -v=7                                                                                                | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-572000                                                                                                            | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:34 PDT |                     |
	| node    | ha-572000 node delete m03 -v=7                                                                                               | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:34 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-572000 stop -v=7                                                                                                          | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:34 PDT | 17 Jul 24 10:37 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-572000 --wait=true                                                                                                     | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:37 PDT | 17 Jul 24 10:40 PDT |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	| node    | add -p ha-572000                                                                                                             | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:40 PDT | 17 Jul 24 10:41 PDT |
	|         | --control-plane -v=7                                                                                                         |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 10:37:21
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 10:37:21.160279    3636 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:37:21.160444    3636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:37:21.160449    3636 out.go:304] Setting ErrFile to fd 2...
	I0717 10:37:21.160453    3636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:37:21.160640    3636 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
	I0717 10:37:21.162037    3636 out.go:298] Setting JSON to false
	I0717 10:37:21.184380    3636 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2212,"bootTime":1721235629,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0717 10:37:21.184474    3636 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:37:21.206845    3636 out.go:177] * [ha-572000] minikube v1.33.1 on Darwin 14.5
	I0717 10:37:21.250316    3636 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 10:37:21.250374    3636 notify.go:220] Checking for updates...
	I0717 10:37:21.294243    3636 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:37:21.315083    3636 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 10:37:21.336268    3636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:37:21.357529    3636 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
	I0717 10:37:21.379368    3636 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:37:21.401138    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:21.401903    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:21.401985    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:21.411459    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51932
	I0717 10:37:21.411825    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:21.412241    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:21.412256    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:21.412501    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:21.412634    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:21.412826    3636 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:37:21.413099    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:21.413120    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:21.421537    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51934
	I0717 10:37:21.421880    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:21.422209    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:21.422224    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:21.422446    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:21.422563    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:21.451265    3636 out.go:177] * Using the hyperkit driver based on existing profile
	I0717 10:37:21.493400    3636 start.go:297] selected driver: hyperkit
	I0717 10:37:21.493425    3636 start.go:901] validating driver "hyperkit" against &{Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclas
s:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:37:21.493682    3636 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:37:21.493865    3636 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:37:21.494086    3636 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19283-1099/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0717 10:37:21.503763    3636 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0717 10:37:21.507648    3636 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:21.507668    3636 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0717 10:37:21.510386    3636 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:37:21.510420    3636 cni.go:84] Creating CNI manager for ""
	I0717 10:37:21.510429    3636 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 10:37:21.510503    3636 start.go:340] cluster config:
	{Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-t
iller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:37:21.510603    3636 iso.go:125] acquiring lock: {Name:mkf51f842bcc8a77e9c7c50d642c4c76848e96af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:37:21.554326    3636 out.go:177] * Starting "ha-572000" primary control-plane node in "ha-572000" cluster
	I0717 10:37:21.575453    3636 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:37:21.575524    3636 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0717 10:37:21.575584    3636 cache.go:56] Caching tarball of preloaded images
	I0717 10:37:21.575806    3636 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:37:21.575825    3636 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:37:21.576014    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:37:21.577007    3636 start.go:360] acquireMachinesLock for ha-572000: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:37:21.577135    3636 start.go:364] duration metric: took 100.667µs to acquireMachinesLock for "ha-572000"
	I0717 10:37:21.577166    3636 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:37:21.577183    3636 fix.go:54] fixHost starting: 
	I0717 10:37:21.577591    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:21.577617    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:21.586612    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51936
	I0717 10:37:21.586997    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:21.587342    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:21.587357    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:21.587563    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:21.587707    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:21.587805    3636 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:37:21.587906    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:21.587984    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3521
	I0717 10:37:21.588936    3636 fix.go:112] recreateIfNeeded on ha-572000: state=Stopped err=<nil>
	I0717 10:37:21.588955    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:21.588954    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid 3521 missing from process table
	W0717 10:37:21.589054    3636 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:37:21.631187    3636 out.go:177] * Restarting existing hyperkit VM for "ha-572000" ...
	I0717 10:37:21.652411    3636 main.go:141] libmachine: (ha-572000) Calling .Start
	I0717 10:37:21.652671    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:21.652780    3636 main.go:141] libmachine: (ha-572000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid
	I0717 10:37:21.654451    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid 3521 missing from process table
	I0717 10:37:21.654462    3636 main.go:141] libmachine: (ha-572000) DBG | pid 3521 is in state "Stopped"
	I0717 10:37:21.654497    3636 main.go:141] libmachine: (ha-572000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid...
	I0717 10:37:21.654867    3636 main.go:141] libmachine: (ha-572000) DBG | Using UUID 5f2666de-0b32-4258-9840-7856c1bd4173
	I0717 10:37:21.763705    3636 main.go:141] libmachine: (ha-572000) DBG | Generated MAC d2:a6:10:ad:80:98
	I0717 10:37:21.763739    3636 main.go:141] libmachine: (ha-572000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:37:21.763844    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5f2666de-0b32-4258-9840-7856c1bd4173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8a80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:37:21.763875    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5f2666de-0b32-4258-9840-7856c1bd4173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8a80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:37:21.763912    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5f2666de-0b32-4258-9840-7856c1bd4173", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/ha-572000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:37:21.763957    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5f2666de-0b32-4258-9840-7856c1bd4173 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/ha-572000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:37:21.763980    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:37:21.765595    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 DEBUG: hyperkit: Pid is 3650
	I0717 10:37:21.766010    3636 main.go:141] libmachine: (ha-572000) DBG | Attempt 0
	I0717 10:37:21.766020    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:21.766092    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3650
	I0717 10:37:21.767880    3636 main.go:141] libmachine: (ha-572000) DBG | Searching for d2:a6:10:ad:80:98 in /var/db/dhcpd_leases ...
	I0717 10:37:21.767940    3636 main.go:141] libmachine: (ha-572000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:37:21.767961    3636 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x669951d0}
	I0717 10:37:21.767972    3636 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x669951be}
	I0717 10:37:21.767977    3636 main.go:141] libmachine: (ha-572000) DBG | Found match: d2:a6:10:ad:80:98
	I0717 10:37:21.767984    3636 main.go:141] libmachine: (ha-572000) DBG | IP: 192.169.0.5
	I0717 10:37:21.768041    3636 main.go:141] libmachine: (ha-572000) Calling .GetConfigRaw
	I0717 10:37:21.768653    3636 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:37:21.768835    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:37:21.769276    3636 machine.go:94] provisionDockerMachine start ...
	I0717 10:37:21.769288    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:21.769440    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:21.769559    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:21.769675    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:21.769782    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:21.769886    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:21.770036    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:21.770285    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:21.770298    3636 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:37:21.773346    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:37:21.825199    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:37:21.825892    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:37:21.825902    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:37:21.825909    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:37:21.825917    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:37:22.200252    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:37:22.200268    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:37:22.314927    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:37:22.314948    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:37:22.314982    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:37:22.314999    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:37:22.315852    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:37:22.315864    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:37:27.580528    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:27 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:37:27.580565    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:27 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:37:27.580573    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:27 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:37:27.604198    3636 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:37:27 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:37:32.830003    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:37:32.830021    3636 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:37:32.830158    3636 buildroot.go:166] provisioning hostname "ha-572000"
	I0717 10:37:32.830170    3636 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:37:32.830268    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:32.830359    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:32.830451    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:32.830548    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:32.830646    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:32.830800    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:32.830958    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:32.830967    3636 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000 && echo "ha-572000" | sudo tee /etc/hostname
	I0717 10:37:32.892396    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000
	
	I0717 10:37:32.892414    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:32.892535    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:32.892617    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:32.892697    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:32.892768    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:32.892926    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:32.893069    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:32.893080    3636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:37:32.952066    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:37:32.952086    3636 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:37:32.952098    3636 buildroot.go:174] setting up certificates
	I0717 10:37:32.952109    3636 provision.go:84] configureAuth start
	I0717 10:37:32.952116    3636 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
	I0717 10:37:32.952255    3636 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:37:32.952365    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:32.952464    3636 provision.go:143] copyHostCerts
	I0717 10:37:32.952503    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:37:32.952585    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:37:32.952594    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:37:32.952749    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:37:32.952965    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:37:32.953012    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:37:32.953018    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:37:32.953117    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:37:32.953281    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:37:32.953328    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:37:32.953333    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:37:32.953420    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:37:32.953574    3636 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000 san=[127.0.0.1 192.169.0.5 ha-572000 localhost minikube]
	I0717 10:37:33.013099    3636 provision.go:177] copyRemoteCerts
	I0717 10:37:33.013145    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:37:33.013161    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:33.013272    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:33.013371    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.013543    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:33.013682    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:33.045521    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:37:33.045593    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:37:33.064633    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:37:33.064699    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0717 10:37:33.084163    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:37:33.084229    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:37:33.103388    3636 provision.go:87] duration metric: took 151.262739ms to configureAuth
	I0717 10:37:33.103401    3636 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:37:33.103573    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:33.103587    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:33.103711    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:33.103809    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:33.103896    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.103977    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.104077    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:33.104181    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:33.104316    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:33.104324    3636 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:37:33.156434    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:37:33.156448    3636 buildroot.go:70] root file system type: tmpfs
	I0717 10:37:33.156525    3636 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:37:33.156537    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:33.156662    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:33.156743    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.156842    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.156931    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:33.157047    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:33.157186    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:33.157233    3636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:37:33.218680    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:37:33.218702    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:33.218866    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:33.218955    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.219056    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:33.219143    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:33.219283    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:33.219430    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:33.219443    3636 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:37:34.829521    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:37:34.829537    3636 machine.go:97] duration metric: took 13.059920588s to provisionDockerMachine
	I0717 10:37:34.829550    3636 start.go:293] postStartSetup for "ha-572000" (driver="hyperkit")
	I0717 10:37:34.829558    3636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:37:34.829569    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:34.829747    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:37:34.829763    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:34.829864    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:34.829977    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:34.830076    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:34.830154    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:34.863781    3636 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:37:34.867753    3636 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:37:34.867768    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:37:34.867875    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:37:34.868074    3636 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:37:34.868081    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:37:34.868294    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:37:34.881801    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:37:34.912172    3636 start.go:296] duration metric: took 82.609841ms for postStartSetup
	I0717 10:37:34.912193    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:34.912376    3636 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:37:34.912397    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:34.912490    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:34.912588    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:34.912689    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:34.912778    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:34.946140    3636 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:37:34.946199    3636 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:37:34.999470    3636 fix.go:56] duration metric: took 13.421948957s for fixHost
	I0717 10:37:34.999494    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:34.999648    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:34.999748    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:34.999850    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:34.999944    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:35.000069    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:35.000221    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0717 10:37:35.000229    3636 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 10:37:35.051085    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237854.922867132
	
	I0717 10:37:35.051099    3636 fix.go:216] guest clock: 1721237854.922867132
	I0717 10:37:35.051112    3636 fix.go:229] Guest: 2024-07-17 10:37:34.922867132 -0700 PDT Remote: 2024-07-17 10:37:34.999482 -0700 PDT m=+13.873438456 (delta=-76.614868ms)
	I0717 10:37:35.051130    3636 fix.go:200] guest clock delta is within tolerance: -76.614868ms
	I0717 10:37:35.051134    3636 start.go:83] releasing machines lock for "ha-572000", held for 13.473647062s
	I0717 10:37:35.051154    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:35.051301    3636 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:37:35.051418    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:35.051739    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:35.051853    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:35.051967    3636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:37:35.051989    3636 ssh_runner.go:195] Run: cat /version.json
	I0717 10:37:35.051998    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:35.052000    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:35.052101    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:35.052120    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:35.052207    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:35.052223    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:35.052289    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:35.052308    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:35.052381    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:35.052403    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:35.080899    3636 ssh_runner.go:195] Run: systemctl --version
	I0717 10:37:35.132487    3636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 10:37:35.137302    3636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:37:35.137349    3636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:37:35.150408    3636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:37:35.150420    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:37:35.150523    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:37:35.166824    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:37:35.175726    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:37:35.184531    3636 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:37:35.184576    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:37:35.193352    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:37:35.202047    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:37:35.210925    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:37:35.219775    3636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:37:35.228824    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:37:35.237746    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:37:35.246520    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:37:35.255409    3636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:37:35.263547    3636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:37:35.271637    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:35.370819    3636 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:37:35.385762    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:37:35.385839    3636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:37:35.397460    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:37:35.408605    3636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:37:35.423025    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:37:35.433954    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:37:35.444983    3636 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:37:35.462789    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:37:35.474320    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:37:35.491905    3636 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:37:35.494848    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:37:35.502963    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:37:35.516602    3636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:37:35.626759    3636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:37:35.732422    3636 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:37:35.732511    3636 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:37:35.746415    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:35.837452    3636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:37:38.134243    3636 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.296714656s)
	I0717 10:37:38.134309    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:37:38.145497    3636 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0717 10:37:38.159451    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:37:38.170560    3636 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:37:38.274400    3636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:37:38.385610    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:38.490247    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:37:38.502358    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:37:38.513179    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:38.610828    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 10:37:38.675050    3636 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:37:38.675129    3636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:37:38.679555    3636 start.go:563] Will wait 60s for crictl version
	I0717 10:37:38.679605    3636 ssh_runner.go:195] Run: which crictl
	I0717 10:37:38.682545    3636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:37:38.707789    3636 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 10:37:38.707873    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:37:38.724822    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:37:38.769236    3636 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:37:38.769287    3636 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:37:38.769657    3636 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:37:38.774296    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:37:38.784075    3636 kubeadm.go:883] updating cluster {Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false f
reshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 10:37:38.784175    3636 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:37:38.784231    3636 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 10:37:38.798317    3636 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0717 10:37:38.798329    3636 docker.go:615] Images already preloaded, skipping extraction
	I0717 10:37:38.798398    3636 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 10:37:38.810938    3636 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0717 10:37:38.810957    3636 cache_images.go:84] Images are preloaded, skipping loading
	I0717 10:37:38.810966    3636 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.30.2 docker true true} ...
	I0717 10:37:38.811048    3636 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-572000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 10:37:38.811115    3636 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 10:37:38.829256    3636 cni.go:84] Creating CNI manager for ""
	I0717 10:37:38.829269    3636 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 10:37:38.829280    3636 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 10:37:38.829295    3636 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-572000 NodeName:ha-572000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 10:37:38.829373    3636 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-572000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 10:37:38.829387    3636 kube-vip.go:115] generating kube-vip config ...
	I0717 10:37:38.829437    3636 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 10:37:38.842048    3636 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 10:37:38.842112    3636 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0717 10:37:38.842157    3636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:37:38.849945    3636 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:37:38.849994    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 10:37:38.857243    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0717 10:37:38.870596    3636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:37:38.883936    3636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0717 10:37:38.897367    3636 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0717 10:37:38.910809    3636 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0717 10:37:38.913705    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:37:38.922873    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:39.030583    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:37:39.043433    3636 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000 for IP: 192.169.0.5
	I0717 10:37:39.043445    3636 certs.go:194] generating shared ca certs ...
	I0717 10:37:39.043456    3636 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:37:39.043642    3636 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:37:39.043720    3636 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:37:39.043730    3636 certs.go:256] generating profile certs ...
	I0717 10:37:39.043839    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key
	I0717 10:37:39.043918    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7
	I0717 10:37:39.043992    3636 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key
	I0717 10:37:39.043999    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:37:39.044021    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:37:39.044039    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:37:39.044057    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:37:39.044074    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 10:37:39.044104    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 10:37:39.044133    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 10:37:39.044152    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 10:37:39.044248    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:37:39.044296    3636 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:37:39.044310    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:37:39.044353    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:37:39.044397    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:37:39.044448    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:37:39.044541    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:37:39.044586    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:39.044607    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:37:39.044626    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:37:39.045107    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:37:39.076893    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:37:39.102499    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:37:39.129749    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:37:39.155627    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 10:37:39.180179    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 10:37:39.210181    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 10:37:39.264808    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 10:37:39.318806    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:37:39.365954    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:37:39.390620    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:37:39.410051    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 10:37:39.423408    3636 ssh_runner.go:195] Run: openssl version
	I0717 10:37:39.427605    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:37:39.436575    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:39.439804    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:39.439837    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:39.443971    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:37:39.452794    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:37:39.461667    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:37:39.464961    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:37:39.465002    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:37:39.469065    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
	I0717 10:37:39.477903    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:37:39.486816    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:37:39.490121    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:37:39.490162    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:37:39.494244    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 10:37:39.503378    3636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:37:39.506714    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 10:37:39.510953    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 10:37:39.515092    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 10:37:39.519272    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 10:37:39.523407    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 10:37:39.527554    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 10:37:39.531780    3636 kubeadm.go:392] StartCluster: {Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 C
lusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:37:39.531904    3636 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 10:37:39.544965    3636 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 10:37:39.553126    3636 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 10:37:39.553138    3636 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 10:37:39.553178    3636 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 10:37:39.561206    3636 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 10:37:39.561518    3636 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-572000" does not appear in /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:37:39.561607    3636 kubeconfig.go:62] /Users/jenkins/minikube-integration/19283-1099/kubeconfig needs updating (will repair): [kubeconfig missing "ha-572000" cluster setting kubeconfig missing "ha-572000" context setting]
	I0717 10:37:39.561822    3636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:37:39.562469    3636 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:37:39.562674    3636 kapi.go:59] client config for ha-572000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa0a8b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 10:37:39.562998    3636 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 10:37:39.563178    3636 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 10:37:39.570855    3636 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0717 10:37:39.570867    3636 kubeadm.go:597] duration metric: took 17.724744ms to restartPrimaryControlPlane
	I0717 10:37:39.570878    3636 kubeadm.go:394] duration metric: took 39.101036ms to StartCluster
	I0717 10:37:39.570889    3636 settings.go:142] acquiring lock: {Name:mkc45f011a907c66e2dbca7dadfff37ab48f7d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:37:39.570961    3636 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:37:39.571333    3636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:37:39.571564    3636 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:37:39.571579    3636 start.go:241] waiting for startup goroutines ...
	I0717 10:37:39.571583    3636 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 10:37:39.571709    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:39.622273    3636 out.go:177] * Enabled addons: 
	I0717 10:37:39.644517    3636 addons.go:510] duration metric: took 72.937257ms for enable addons: enabled=[]
	I0717 10:37:39.644554    3636 start.go:246] waiting for cluster config update ...
	I0717 10:37:39.644589    3636 start.go:255] writing updated cluster config ...
	I0717 10:37:39.667630    3636 out.go:177] 
	I0717 10:37:39.689827    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:39.689958    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:37:39.712261    3636 out.go:177] * Starting "ha-572000-m02" control-plane node in "ha-572000" cluster
	I0717 10:37:39.754151    3636 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:37:39.754211    3636 cache.go:56] Caching tarball of preloaded images
	I0717 10:37:39.754408    3636 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:37:39.754427    3636 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:37:39.754564    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:37:39.755532    3636 start.go:360] acquireMachinesLock for ha-572000-m02: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:37:39.755656    3636 start.go:364] duration metric: took 98.999µs to acquireMachinesLock for "ha-572000-m02"
	I0717 10:37:39.755680    3636 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:37:39.755687    3636 fix.go:54] fixHost starting: m02
	I0717 10:37:39.756121    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:39.756167    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:39.765321    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51958
	I0717 10:37:39.765669    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:39.765987    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:39.765996    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:39.766231    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:39.766367    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:39.766465    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetState
	I0717 10:37:39.766561    3636 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:39.766639    3636 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 3526
	I0717 10:37:39.767558    3636 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid 3526 missing from process table
	I0717 10:37:39.767584    3636 fix.go:112] recreateIfNeeded on ha-572000-m02: state=Stopped err=<nil>
	I0717 10:37:39.767592    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	W0717 10:37:39.767681    3636 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:37:39.811253    3636 out.go:177] * Restarting existing hyperkit VM for "ha-572000-m02" ...
	I0717 10:37:39.832179    3636 main.go:141] libmachine: (ha-572000-m02) Calling .Start
	I0717 10:37:39.832337    3636 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:39.832362    3636 main.go:141] libmachine: (ha-572000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid
	I0717 10:37:39.833334    3636 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid 3526 missing from process table
	I0717 10:37:39.833343    3636 main.go:141] libmachine: (ha-572000-m02) DBG | pid 3526 is in state "Stopped"
	I0717 10:37:39.833355    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid...
	I0717 10:37:39.833536    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Using UUID b5da5881-83da-4916-aec8-9a96c30c8c05
	I0717 10:37:39.859749    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Generated MAC 2:60:33:0:68:8b
	I0717 10:37:39.859777    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:37:39.859978    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b5da5881-83da-4916-aec8-9a96c30c8c05", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c0ae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:37:39.860020    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b5da5881-83da-4916-aec8-9a96c30c8c05", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c0ae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:37:39.860096    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b5da5881-83da-4916-aec8-9a96c30c8c05", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/ha-572000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:37:39.860169    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b5da5881-83da-4916-aec8-9a96c30c8c05 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/ha-572000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:37:39.860189    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:37:39.861788    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 DEBUG: hyperkit: Pid is 3657
	I0717 10:37:39.862251    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Attempt 0
	I0717 10:37:39.862268    3636 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:39.862355    3636 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 3657
	I0717 10:37:39.864079    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Searching for 2:60:33:0:68:8b in /var/db/dhcpd_leases ...
	I0717 10:37:39.864121    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:37:39.864142    3636 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x669952da}
	I0717 10:37:39.864158    3636 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x669951d0}
	I0717 10:37:39.864182    3636 main.go:141] libmachine: (ha-572000-m02) DBG | Found match: 2:60:33:0:68:8b
	I0717 10:37:39.864197    3636 main.go:141] libmachine: (ha-572000-m02) DBG | IP: 192.169.0.6
	I0717 10:37:39.864229    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetConfigRaw
	I0717 10:37:39.865013    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:37:39.865242    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:37:39.865841    3636 machine.go:94] provisionDockerMachine start ...
	I0717 10:37:39.865853    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:39.866023    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:39.866160    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:39.866271    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:39.866402    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:39.866505    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:39.866622    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:39.866842    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:39.866854    3636 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:37:39.869683    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:37:39.878483    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:37:39.879603    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:37:39.879617    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:37:39.879624    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:37:39.879629    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:37:40.255889    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:37:40.255907    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:37:40.370491    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:37:40.370510    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:37:40.370520    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:37:40.370527    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:37:40.371371    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:37:40.371379    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:37:45.614184    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:45 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:37:45.614198    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:45 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:37:45.614209    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:45 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:37:45.638128    3636 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:37:45 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:37:50.925250    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:37:50.925264    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:37:50.925388    3636 buildroot.go:166] provisioning hostname "ha-572000-m02"
	I0717 10:37:50.925396    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:37:50.925487    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:50.925569    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:50.925664    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:50.925753    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:50.925857    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:50.925992    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:50.926145    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:50.926154    3636 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000-m02 && echo "ha-572000-m02" | sudo tee /etc/hostname
	I0717 10:37:50.991059    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000-m02
	
	I0717 10:37:50.991079    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:50.991219    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:50.991316    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:50.991401    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:50.991492    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:50.991638    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:50.991791    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:50.991803    3636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:37:51.051090    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:37:51.051108    3636 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:37:51.051119    3636 buildroot.go:174] setting up certificates
	I0717 10:37:51.051126    3636 provision.go:84] configureAuth start
	I0717 10:37:51.051132    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
	I0717 10:37:51.051276    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:37:51.051370    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:51.051458    3636 provision.go:143] copyHostCerts
	I0717 10:37:51.051492    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:37:51.051538    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:37:51.051544    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:37:51.051674    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:37:51.051883    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:37:51.051914    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:37:51.051919    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:37:51.052017    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:37:51.052173    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:37:51.052202    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:37:51.052207    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:37:51.052377    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:37:51.052529    3636 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000-m02 san=[127.0.0.1 192.169.0.6 ha-572000-m02 localhost minikube]
	I0717 10:37:51.118183    3636 provision.go:177] copyRemoteCerts
	I0717 10:37:51.118227    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:37:51.118240    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:51.118378    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:51.118485    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.118583    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:51.118673    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:37:51.152061    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:37:51.152130    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:37:51.171745    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:37:51.171819    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 10:37:51.192673    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:37:51.192744    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:37:51.212788    3636 provision.go:87] duration metric: took 161.649391ms to configureAuth
	I0717 10:37:51.212802    3636 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:37:51.212965    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:51.212978    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:51.213112    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:51.213224    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:51.213316    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.213411    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.213499    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:51.213614    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:51.213748    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:51.213755    3636 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:37:51.269367    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:37:51.269384    3636 buildroot.go:70] root file system type: tmpfs
	I0717 10:37:51.269468    3636 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:37:51.269484    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:51.269663    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:51.269800    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.269888    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.269973    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:51.270120    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:51.270267    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:51.270313    3636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:37:51.334311    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:37:51.334330    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:51.334460    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:51.334550    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.334644    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:51.334739    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:51.334864    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:51.335013    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:51.335026    3636 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:37:52.973251    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:37:52.973265    3636 machine.go:97] duration metric: took 13.107082478s to provisionDockerMachine
	I0717 10:37:52.973273    3636 start.go:293] postStartSetup for "ha-572000-m02" (driver="hyperkit")
	I0717 10:37:52.973280    3636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:37:52.973291    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:52.973486    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:37:52.973497    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:52.973604    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:52.973699    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:52.973791    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:52.973882    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:37:53.016888    3636 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:37:53.020683    3636 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:37:53.020693    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:37:53.020793    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:37:53.020968    3636 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:37:53.020974    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:37:53.021167    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:37:53.029813    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:37:53.057224    3636 start.go:296] duration metric: took 83.939886ms for postStartSetup
	I0717 10:37:53.057245    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:53.057420    3636 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:37:53.057442    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:53.057549    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:53.057634    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:53.057729    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:53.057811    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:37:53.091296    3636 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:37:53.091355    3636 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:37:53.145297    3636 fix.go:56] duration metric: took 13.389268028s for fixHost
	I0717 10:37:53.145323    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:53.145457    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:53.145570    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:53.145662    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:53.145747    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:53.145888    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:37:53.146033    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0717 10:37:53.146041    3636 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 10:37:53.200266    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237873.035451058
	
	I0717 10:37:53.200279    3636 fix.go:216] guest clock: 1721237873.035451058
	I0717 10:37:53.200284    3636 fix.go:229] Guest: 2024-07-17 10:37:53.035451058 -0700 PDT Remote: 2024-07-17 10:37:53.145313 -0700 PDT m=+32.018809214 (delta=-109.861942ms)
	I0717 10:37:53.200294    3636 fix.go:200] guest clock delta is within tolerance: -109.861942ms
	I0717 10:37:53.200298    3636 start.go:83] releasing machines lock for "ha-572000-m02", held for 13.44429115s
	I0717 10:37:53.200315    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:53.200436    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:37:53.222208    3636 out.go:177] * Found network options:
	I0717 10:37:53.243791    3636 out.go:177]   - NO_PROXY=192.169.0.5
	W0717 10:37:53.264601    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:37:53.264624    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:53.265081    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:53.265198    3636 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
	I0717 10:37:53.265269    3636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:37:53.265297    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	W0717 10:37:53.265332    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:37:53.265384    3636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 10:37:53.265387    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:53.265394    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
	I0717 10:37:53.265518    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:53.265536    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
	I0717 10:37:53.265639    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:53.265670    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
	I0717 10:37:53.265728    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	I0717 10:37:53.265789    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
	I0717 10:37:53.265871    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
	W0717 10:37:53.294993    3636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:37:53.295059    3636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:37:53.339897    3636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:37:53.339919    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:37:53.340039    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:37:53.356231    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:37:53.365203    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:37:53.374127    3636 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:37:53.374184    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:37:53.382910    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:37:53.391778    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:37:53.400635    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:37:53.409795    3636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:37:53.418780    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:37:53.427594    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:37:53.436364    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:37:53.445437    3636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:37:53.453621    3636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:37:53.461634    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:53.558529    3636 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:37:53.577286    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:37:53.577360    3636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:37:53.591736    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:37:53.603521    3636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:37:53.618503    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:37:53.629064    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:37:53.639359    3636 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:37:53.658160    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:37:53.668814    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:37:53.683643    3636 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:37:53.686618    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:37:53.693926    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:37:53.707525    3636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:37:53.805691    3636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:37:53.920383    3636 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:37:53.920404    3636 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:37:53.934506    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:54.030259    3636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:37:56.344867    3636 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.314525686s)
	I0717 10:37:56.344926    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:37:56.355390    3636 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0717 10:37:56.369820    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:37:56.380473    3636 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:37:56.479810    3636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:37:56.576860    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:56.671071    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:37:56.685037    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:37:56.696333    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:56.796692    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 10:37:56.861896    3636 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:37:56.861969    3636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:37:56.866672    3636 start.go:563] Will wait 60s for crictl version
	I0717 10:37:56.866724    3636 ssh_runner.go:195] Run: which crictl
	I0717 10:37:56.869877    3636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:37:56.896141    3636 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 10:37:56.896217    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:37:56.915592    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:37:56.953839    3636 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:37:56.975427    3636 out.go:177]   - env NO_PROXY=192.169.0.5
	I0717 10:37:56.996201    3636 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
	I0717 10:37:56.996608    3636 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:37:57.001171    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:37:57.011676    3636 mustload.go:65] Loading cluster: ha-572000
	I0717 10:37:57.011852    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:57.012113    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:57.012134    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:57.020969    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51980
	I0717 10:37:57.021367    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:57.021710    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:57.021724    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:57.021923    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:57.022051    3636 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:37:57.022138    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:37:57.022223    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3650
	I0717 10:37:57.023174    3636 host.go:66] Checking if "ha-572000" exists ...
	I0717 10:37:57.023426    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:37:57.023448    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:37:57.032019    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51982
	I0717 10:37:57.032378    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:37:57.032733    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:37:57.032749    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:37:57.032974    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:37:57.033082    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:37:57.033182    3636 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000 for IP: 192.169.0.6
	I0717 10:37:57.033189    3636 certs.go:194] generating shared ca certs ...
	I0717 10:37:57.033198    3636 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:37:57.033338    3636 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:37:57.033394    3636 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:37:57.033402    3636 certs.go:256] generating profile certs ...
	I0717 10:37:57.033489    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key
	I0717 10:37:57.033573    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.060f3240
	I0717 10:37:57.033624    3636 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key
	I0717 10:37:57.033631    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:37:57.033652    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:37:57.033672    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:37:57.033691    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:37:57.033708    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 10:37:57.033726    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 10:37:57.033744    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 10:37:57.033762    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 10:37:57.033840    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:37:57.033893    3636 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:37:57.033902    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:37:57.033938    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:37:57.033978    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:37:57.034008    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:37:57.034074    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:37:57.034108    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:37:57.034128    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:37:57.034146    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:57.034178    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:37:57.034270    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:37:57.034368    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:37:57.034458    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:37:57.034541    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:37:57.060171    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0717 10:37:57.063698    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 10:37:57.072274    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0717 10:37:57.075754    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0717 10:37:57.084043    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 10:37:57.087057    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 10:37:57.095232    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0717 10:37:57.098576    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0717 10:37:57.107451    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0717 10:37:57.110444    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 10:37:57.118613    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0717 10:37:57.121532    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 10:37:57.130217    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:37:57.149961    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:37:57.168914    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:37:57.188002    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:37:57.207206    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 10:37:57.226812    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 10:37:57.246070    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 10:37:57.265450    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 10:37:57.284420    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:37:57.303511    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:37:57.322687    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:37:57.341613    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 10:37:57.355190    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0717 10:37:57.368847    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 10:37:57.382513    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0717 10:37:57.395989    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 10:37:57.409357    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 10:37:57.423052    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0717 10:37:57.436932    3636 ssh_runner.go:195] Run: openssl version
	I0717 10:37:57.441057    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:37:57.450112    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:37:57.453386    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:37:57.453428    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:37:57.457514    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
	I0717 10:37:57.466394    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:37:57.475362    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:37:57.478777    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:37:57.478819    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:37:57.482919    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 10:37:57.491931    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:37:57.500785    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:57.504034    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:57.504067    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:37:57.508244    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:37:57.517376    3636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:37:57.520713    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 10:37:57.524959    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 10:37:57.529259    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 10:37:57.533468    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 10:37:57.537834    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 10:37:57.542026    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 10:37:57.546248    3636 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.30.2 docker true true} ...
	I0717 10:37:57.546318    3636 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-572000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 10:37:57.546337    3636 kube-vip.go:115] generating kube-vip config ...
	I0717 10:37:57.546371    3636 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 10:37:57.559423    3636 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 10:37:57.559466    3636 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
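
Note: the manifest above runs kube-vip as a static pod on each control-plane node. With cp_enable and lb_enable set, the instances lease-elect a leader (lease plndr-cp-lock, 5s duration, 3s renew deadline) that answers ARP for the virtual address 192.169.0.254 and load-balances API-server traffic on port 8443. A quick way to confirm the VIP is actually being served, sketched below in Go under the assumption that the address and port match this config, is to open a TLS connection to it; certificate verification is skipped only because this probe carries no CA bundle.

    // vipprobe.go - hedged sketch: check that the kube-vip address answers on
    // the API server port. Not part of minikube, purely an illustration.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	dialer := &net.Dialer{Timeout: 3 * time.Second}
    	// InsecureSkipVerify is used because this standalone probe has no CA bundle.
    	conn, err := tls.DialWithDialer(dialer, "tcp", "192.169.0.254:8443",
    		&tls.Config{InsecureSkipVerify: true})
    	if err != nil {
    		fmt.Println("VIP not reachable:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("VIP is serving TLS, negotiated version:", conn.ConnectionState().Version)
    }
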
	I0717 10:37:57.559520    3636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:37:57.567774    3636 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:37:57.567817    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 10:37:57.575763    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0717 10:37:57.589137    3636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:37:57.602430    3636 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0717 10:37:57.616134    3636 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0717 10:37:57.619036    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
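
Note: the bash one-liner above makes the /etc/hosts update idempotent: it drops any existing line ending in a tab plus control-plane.minikube.internal, appends a fresh entry pointing at the VIP, and copies the result back over /etc/hosts. A rough Go equivalent of the filter-and-append step is sketched below; the IP and hostname come from the log, and the privileged write-back is left out.

    // hostsentry.go - hedged sketch of the idempotent /etc/hosts update above:
    // remove any stale control-plane.minikube.internal line, then append one
    // that points at the VIP.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func upsertHostsEntry(contents, ip, host string) string {
    	var kept []string
    	for _, line := range strings.Split(contents, "\n") {
    		if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+host) {
    			continue // drop the stale entry, whatever IP it had
    		}
    		kept = append(kept, line)
    	}
    	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, host)
    }

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	updated := upsertHostsEntry(strings.TrimRight(string(data), "\n"),
    		"192.169.0.254", "control-plane.minikube.internal")
    	fmt.Print(updated) // a real caller would copy this back with sudo, as the log does
    }
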
	I0717 10:37:57.629004    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:57.726717    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:37:57.741206    3636 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:37:57.741389    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:37:57.762661    3636 out.go:177] * Verifying Kubernetes components...
	I0717 10:37:57.804314    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:37:57.930654    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:37:57.959022    3636 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:37:57.959251    3636 kapi.go:59] client config for ha-572000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa0a8b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 10:37:57.959292    3636 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0717 10:37:57.959472    3636 node_ready.go:35] waiting up to 6m0s for node "ha-572000-m02" to be "Ready" ...
	I0717 10:37:57.959551    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:37:57.959557    3636 round_trippers.go:469] Request Headers:
	I0717 10:37:57.959564    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:37:57.959567    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.587526    3636 round_trippers.go:574] Response Status: 200 OK in 8627 milliseconds
	I0717 10:38:06.588080    3636 node_ready.go:49] node "ha-572000-m02" has status "Ready":"True"
	I0717 10:38:06.588093    3636 node_ready.go:38] duration metric: took 8.628386286s for node "ha-572000-m02" to be "Ready" ...
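
Note: the 8.6s wait above is a plain polling loop: repeat GET /api/v1/nodes/ha-572000-m02 until the node reports a Ready condition of True or the 6-minute budget runs out. A minimal client-go sketch of that loop follows; the kubeconfig path is a placeholder and the fixed 3-second poll interval is an assumption rather than minikube's exact backoff.

    // nodeready.go - hedged sketch of the "wait for node Ready" loop using client-go.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-572000-m02", metav1.GetOptions{})
    		if err == nil {
    			for _, cond := range node.Status.Conditions {
    				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(3 * time.Second) // simple fixed poll; the real tool uses its own retry cadence
    	}
    	fmt.Println("timed out waiting for node Ready")
    }
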
	I0717 10:38:06.588101    3636 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:38:06.588149    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:06.588155    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.588161    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.588168    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.624239    3636 round_trippers.go:574] Response Status: 200 OK in 36 milliseconds
	I0717 10:38:06.633134    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.633193    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:06.633198    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.633204    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.633210    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.642331    3636 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 10:38:06.642741    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:06.642749    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.642756    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.642759    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.645958    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:06.646753    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:06.646763    3636 pod_ready.go:81] duration metric: took 13.611341ms for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.646771    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.646808    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9dzd5
	I0717 10:38:06.646813    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.646818    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.646822    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.650165    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:06.650520    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:06.650527    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.650533    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.650538    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.652506    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:06.652830    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:06.652839    3636 pod_ready.go:81] duration metric: took 6.063342ms for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.652846    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.652883    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000
	I0717 10:38:06.652888    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.652894    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.652897    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.688343    3636 round_trippers.go:574] Response Status: 200 OK in 35 milliseconds
	I0717 10:38:06.688830    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:06.688842    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.688852    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.688855    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.691433    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:06.691756    3636 pod_ready.go:92] pod "etcd-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:06.691766    3636 pod_ready.go:81] duration metric: took 38.913354ms for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.691776    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.691822    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m02
	I0717 10:38:06.691828    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.691835    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.691841    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.722915    3636 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0717 10:38:06.723291    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:06.723298    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.723304    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.723309    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.762595    3636 round_trippers.go:574] Response Status: 200 OK in 39 milliseconds
	I0717 10:38:06.763038    3636 pod_ready.go:92] pod "etcd-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:06.763050    3636 pod_ready.go:81] duration metric: took 71.265447ms for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.763057    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.763098    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m03
	I0717 10:38:06.763103    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.763109    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.763112    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.766379    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:06.788728    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:06.788744    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.788750    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.788754    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.790975    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:06.791292    3636 pod_ready.go:92] pod "etcd-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:06.791302    3636 pod_ready.go:81] duration metric: took 28.239348ms for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.791319    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:06.988792    3636 request.go:629] Waited for 197.413405ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:38:06.988885    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:38:06.988891    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:06.988897    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:06.988903    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:06.991048    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:07.189095    3636 request.go:629] Waited for 197.524443ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:07.189132    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:07.189138    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:07.189146    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:07.189196    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:07.191472    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:07.191816    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:07.191825    3636 pod_ready.go:81] duration metric: took 400.490534ms for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
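
Note: the repeated "Waited for ... due to client-side throttling, not priority and fairness" messages are emitted by client-go's request rate limiter, not by the API server. With QPS and Burst left at 0 in the rest.Config dump earlier, the client falls back to its small defaults (about 5 QPS with a burst of 10), so this run of per-pod and per-node GETs gets spaced out by roughly 200ms each, which matches the waits logged here. A tool issuing many small requests can raise those limits when building its client, as in the hedged sketch below (the numbers are arbitrary).

    // throttle.go - hedged sketch: raising QPS/Burst on the rest.Config removes
    // the client-side throttling waits seen above for bursts of small GETs.
    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 50    // client-go falls back to ~5 when this is left at 0
    	cfg.Burst = 100 // client-go falls back to ~10 when this is left at 0
    	return kubernetes.NewForConfig(cfg)
    }

    func main() {
    	if _, err := newFastClient("/path/to/kubeconfig"); err != nil { // placeholder path
    		panic(err)
    	}
    }
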
	I0717 10:38:07.191832    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:07.388673    3636 request.go:629] Waited for 196.768491ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:38:07.388712    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:38:07.388717    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:07.388723    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:07.388726    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:07.390742    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:07.589477    3636 request.go:629] Waited for 198.180735ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:07.589514    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:07.589519    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:07.589526    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:07.589532    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:07.593904    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:38:07.594274    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:07.594283    3636 pod_ready.go:81] duration metric: took 402.436695ms for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:07.594290    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:07.789046    3636 request.go:629] Waited for 194.715768ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:38:07.789116    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:38:07.789122    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:07.789128    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:07.789134    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:07.791498    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:07.988262    3636 request.go:629] Waited for 196.319765ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:07.988331    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:07.988337    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:07.988344    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:07.988349    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:07.990665    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:07.990933    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:07.990943    3636 pod_ready.go:81] duration metric: took 396.637435ms for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:07.990949    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:08.189888    3636 request.go:629] Waited for 198.896315ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:38:08.189961    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:38:08.189968    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:08.189977    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:08.189982    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:08.192640    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:08.388942    3636 request.go:629] Waited for 195.85351ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:08.388998    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:08.389006    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:08.389019    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:08.389035    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:08.392574    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:08.392939    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:08.392951    3636 pod_ready.go:81] duration metric: took 401.985681ms for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:08.392963    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:08.589323    3636 request.go:629] Waited for 196.303012ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:38:08.589449    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:38:08.589461    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:08.589473    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:08.589481    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:08.592867    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:08.788589    3636 request.go:629] Waited for 195.011915ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:08.788634    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:08.788643    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:08.788654    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:08.788663    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:08.791468    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:08.791995    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:08.792019    3636 pod_ready.go:81] duration metric: took 399.039947ms for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:08.792032    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:08.990174    3636 request.go:629] Waited for 198.086662ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:38:08.990290    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:38:08.990300    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:08.990310    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:08.990317    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:08.993459    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:09.189555    3636 request.go:629] Waited for 195.556708ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:09.189686    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:09.189699    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:09.189710    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:09.189717    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:09.193157    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:09.193504    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:09.193518    3636 pod_ready.go:81] duration metric: took 401.469313ms for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:09.193543    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:09.389705    3636 request.go:629] Waited for 196.104363ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:38:09.389843    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:38:09.389855    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:09.389866    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:09.389872    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:09.393695    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:09.588443    3636 request.go:629] Waited for 194.213728ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:38:09.588571    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:38:09.588582    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:09.588591    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:09.588614    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:09.591794    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:09.592120    3636 pod_ready.go:92] pod "kube-proxy-5wcph" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:09.592130    3636 pod_ready.go:81] duration metric: took 398.566071ms for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:09.592136    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:09.789810    3636 request.go:629] Waited for 197.599858ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:38:09.789932    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:38:09.789953    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:09.789967    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:09.789977    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:09.793548    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:09.990128    3636 request.go:629] Waited for 195.990226ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:09.990259    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:09.990271    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:09.990282    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:09.990289    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:09.994401    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:38:09.995074    3636 pod_ready.go:92] pod "kube-proxy-h7k9z" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:09.995084    3636 pod_ready.go:81] duration metric: took 402.932164ms for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:09.995091    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:10.188412    3636 request.go:629] Waited for 193.228723ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:38:10.188460    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:38:10.188468    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:10.188479    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:10.188487    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:10.192053    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:10.389379    3636 request.go:629] Waited for 196.635202ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:10.389554    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:10.389574    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:10.389589    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:10.389599    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:10.393541    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:10.393889    3636 pod_ready.go:92] pod "kube-proxy-hst7h" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:10.393900    3636 pod_ready.go:81] duration metric: took 398.793558ms for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:10.393912    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:10.589752    3636 request.go:629] Waited for 195.757616ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:38:10.589810    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:38:10.589821    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:10.589833    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:10.589842    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:10.593161    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:10.789574    3636 request.go:629] Waited for 195.972483ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:10.789649    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:10.789655    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:10.789661    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:10.789665    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:10.792056    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:10.792456    3636 pod_ready.go:92] pod "kube-proxy-v6jxh" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:10.792465    3636 pod_ready.go:81] duration metric: took 398.537807ms for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:10.792472    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:10.990155    3636 request.go:629] Waited for 197.636631ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:38:10.990304    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:38:10.990316    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:10.990327    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:10.990333    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:10.993508    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:11.188937    3636 request.go:629] Waited for 194.57393ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:11.188967    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:11.188973    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:11.188979    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:11.188983    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:11.190738    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:11.191134    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:11.191144    3636 pod_ready.go:81] duration metric: took 398.656979ms for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:11.191150    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:11.388866    3636 request.go:629] Waited for 197.675969ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:38:11.388927    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:38:11.388931    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:11.388937    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:11.388941    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:11.390887    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:11.589661    3636 request.go:629] Waited for 198.35169ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:11.589745    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:38:11.589751    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:11.589759    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:11.589764    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:11.591880    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:11.592231    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:11.592240    3636 pod_ready.go:81] duration metric: took 401.075331ms for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:11.592247    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:11.790368    3636 request.go:629] Waited for 198.069219ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:38:11.790468    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:38:11.790479    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:11.790491    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:11.790498    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:11.793691    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:11.988391    3636 request.go:629] Waited for 194.130009ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:11.988514    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:11.988524    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:11.988535    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:11.988543    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:11.991587    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:11.991946    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:38:11.991960    3636 pod_ready.go:81] duration metric: took 399.692083ms for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:11.991969    3636 pod_ready.go:38] duration metric: took 5.403719656s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:38:11.991988    3636 api_server.go:52] waiting for apiserver process to appear ...
	I0717 10:38:11.992040    3636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 10:38:12.003855    3636 api_server.go:72] duration metric: took 14.26226374s to wait for apiserver process to appear ...
	I0717 10:38:12.003867    3636 api_server.go:88] waiting for apiserver healthz status ...
	I0717 10:38:12.003882    3636 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0717 10:38:12.008423    3636 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0717 10:38:12.008465    3636 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0717 10:38:12.008471    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:12.008478    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:12.008481    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:12.009101    3636 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:38:12.009162    3636 api_server.go:141] control plane version: v1.30.2
	I0717 10:38:12.009171    3636 api_server.go:131] duration metric: took 5.299116ms to wait for apiserver health ...
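
Note: the health probe above touches two endpoints: /healthz returns the literal body "ok" with a 200 status, and /version returns a JSON document whose gitVersion field carries the control-plane version (v1.30.2 here). The Go sketch below hits both against the node endpoint used in the log; it skips certificate verification and sends no client credentials, which is an assumption for brevity (the real check uses the cluster CA and client certs, and anonymous requests may be rejected depending on cluster settings).

    // healthprobe.go - hedged sketch of the healthz and version checks above.
    package main

    import (
    	"crypto/tls"
    	"encoding/json"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Skipping verification keeps the sketch self-contained; minikube presents real certs.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}

    	resp, err := client.Get("https://192.169.0.5:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	body, _ := io.ReadAll(resp.Body)
    	resp.Body.Close()
    	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)

    	resp, err = client.Get("https://192.169.0.5:8443/version")
    	if err != nil {
    		panic(err)
    	}
    	var v struct {
    		GitVersion string `json:"gitVersion"`
    	}
    	json.NewDecoder(resp.Body).Decode(&v)
    	resp.Body.Close()
    	fmt.Println("control plane version:", v.GitVersion)
    }
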
	I0717 10:38:12.009178    3636 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 10:38:12.189013    3636 request.go:629] Waited for 179.768156ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:12.189094    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:12.189102    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:12.189111    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:12.189116    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:12.194083    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:38:12.199463    3636 system_pods.go:59] 26 kube-system pods found
	I0717 10:38:12.199478    3636 system_pods.go:61] "coredns-7db6d8ff4d-2phrp" [e76a91cf-8a14-454c-baee-7c0128645e0e] Running
	I0717 10:38:12.199495    3636 system_pods.go:61] "coredns-7db6d8ff4d-9dzd5" [f9a7cff1-56a3-4600-ae2f-a951dda10753] Running
	I0717 10:38:12.199501    3636 system_pods.go:61] "etcd-ha-572000" [2d7b717e-d404-4c63-afe9-799de6711964] Running
	I0717 10:38:12.199505    3636 system_pods.go:61] "etcd-ha-572000-m02" [8abc7662-c159-4953-9aba-11a75b4e7d65] Running
	I0717 10:38:12.199509    3636 system_pods.go:61] "etcd-ha-572000-m03" [78629908-d362-4a27-933f-2b929867c22f] Running
	I0717 10:38:12.199518    3636 system_pods.go:61] "kindnet-5xsrp" [0b84aa4d-7452-47f6-8052-f063e2e8ef7b] Running
	I0717 10:38:12.199521    3636 system_pods.go:61] "kindnet-72zfp" [0cc735c4-08b3-499a-b9e2-8c38377f371b] Running
	I0717 10:38:12.199524    3636 system_pods.go:61] "kindnet-g2m92" [be3d84cf-f9e8-426c-b02a-49eb7eed9d6a] Running
	I0717 10:38:12.199526    3636 system_pods.go:61] "kindnet-t85bv" [8649cc70-8ce8-4caf-938e-bd253fa5b7ae] Running
	I0717 10:38:12.199530    3636 system_pods.go:61] "kube-apiserver-ha-572000" [6409591d-7a50-414b-be63-44d5ddb0b0e0] Running
	I0717 10:38:12.199532    3636 system_pods.go:61] "kube-apiserver-ha-572000-m02" [d658459c-67f9-4798-87f2-c651d830af35] Running
	I0717 10:38:12.199535    3636 system_pods.go:61] "kube-apiserver-ha-572000-m03" [ac1c02f1-f023-4ad2-99bc-2657fb9d50d7] Running
	I0717 10:38:12.199538    3636 system_pods.go:61] "kube-controller-manager-ha-572000" [075c4d23-04bd-404a-822d-ae7d326a68ac] Running
	I0717 10:38:12.199541    3636 system_pods.go:61] "kube-controller-manager-ha-572000-m02" [3014bdb3-bb86-48d5-a045-2a8dab026bd8] Running
	I0717 10:38:12.199544    3636 system_pods.go:61] "kube-controller-manager-ha-572000-m03" [3b76e220-3a5b-4dac-8f69-6b380799d2ef] Running
	I0717 10:38:12.199546    3636 system_pods.go:61] "kube-proxy-5wcph" [731f5b57-131e-4e97-b47a-036b8d4edbcd] Running
	I0717 10:38:12.199553    3636 system_pods.go:61] "kube-proxy-h7k9z" [cf687084-b40d-43b6-9476-8c60c5f37d1d] Running
	I0717 10:38:12.199557    3636 system_pods.go:61] "kube-proxy-hst7h" [2b7c8d2c-3d71-4357-9429-1d8438c446f5] Running
	I0717 10:38:12.199559    3636 system_pods.go:61] "kube-proxy-v6jxh" [3f952fc8-747f-49da-b400-5212dba538d8] Running
	I0717 10:38:12.199565    3636 system_pods.go:61] "kube-scheduler-ha-572000" [da36a422-ba49-4cd6-8f47-9be7c43657be] Running
	I0717 10:38:12.199568    3636 system_pods.go:61] "kube-scheduler-ha-572000-m02" [13e289a2-8d3c-478f-9220-1d114dd5bf62] Running
	I0717 10:38:12.199571    3636 system_pods.go:61] "kube-scheduler-ha-572000-m03" [428716db-8096-4b57-9f90-e706d5683852] Running
	I0717 10:38:12.199573    3636 system_pods.go:61] "kube-vip-ha-572000" [289bb6df-c101-49d3-9e4a-6c0ecdd26551] Running
	I0717 10:38:12.199576    3636 system_pods.go:61] "kube-vip-ha-572000-m02" [fa57b8d2-bc41-42af-9d9a-1b68f882e6fe] Running
	I0717 10:38:12.199579    3636 system_pods.go:61] "kube-vip-ha-572000-m03" [a35c5007-dfe3-4b58-b16c-d568b8f9c1c2] Running
	I0717 10:38:12.199581    3636 system_pods.go:61] "storage-provisioner" [1801216d-6529-4049-b874-1577132fc03f] Running
	I0717 10:38:12.199585    3636 system_pods.go:74] duration metric: took 190.398086ms to wait for pod list to return data ...
	I0717 10:38:12.199592    3636 default_sa.go:34] waiting for default service account to be created ...
	I0717 10:38:12.388401    3636 request.go:629] Waited for 188.727547ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0717 10:38:12.388434    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0717 10:38:12.388439    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:12.388445    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:12.388449    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:12.390736    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:12.390877    3636 default_sa.go:45] found service account: "default"
	I0717 10:38:12.390886    3636 default_sa.go:55] duration metric: took 191.284842ms for default service account to be created ...
	I0717 10:38:12.390892    3636 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 10:38:12.588992    3636 request.go:629] Waited for 198.054942ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:12.589092    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:12.589101    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:12.589115    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:12.589123    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:12.595003    3636 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 10:38:12.599941    3636 system_pods.go:86] 26 kube-system pods found
	I0717 10:38:12.599953    3636 system_pods.go:89] "coredns-7db6d8ff4d-2phrp" [e76a91cf-8a14-454c-baee-7c0128645e0e] Running
	I0717 10:38:12.599962    3636 system_pods.go:89] "coredns-7db6d8ff4d-9dzd5" [f9a7cff1-56a3-4600-ae2f-a951dda10753] Running
	I0717 10:38:12.599966    3636 system_pods.go:89] "etcd-ha-572000" [2d7b717e-d404-4c63-afe9-799de6711964] Running
	I0717 10:38:12.599970    3636 system_pods.go:89] "etcd-ha-572000-m02" [8abc7662-c159-4953-9aba-11a75b4e7d65] Running
	I0717 10:38:12.599986    3636 system_pods.go:89] "etcd-ha-572000-m03" [78629908-d362-4a27-933f-2b929867c22f] Running
	I0717 10:38:12.599992    3636 system_pods.go:89] "kindnet-5xsrp" [0b84aa4d-7452-47f6-8052-f063e2e8ef7b] Running
	I0717 10:38:12.599996    3636 system_pods.go:89] "kindnet-72zfp" [0cc735c4-08b3-499a-b9e2-8c38377f371b] Running
	I0717 10:38:12.599999    3636 system_pods.go:89] "kindnet-g2m92" [be3d84cf-f9e8-426c-b02a-49eb7eed9d6a] Running
	I0717 10:38:12.600003    3636 system_pods.go:89] "kindnet-t85bv" [8649cc70-8ce8-4caf-938e-bd253fa5b7ae] Running
	I0717 10:38:12.600007    3636 system_pods.go:89] "kube-apiserver-ha-572000" [6409591d-7a50-414b-be63-44d5ddb0b0e0] Running
	I0717 10:38:12.600010    3636 system_pods.go:89] "kube-apiserver-ha-572000-m02" [d658459c-67f9-4798-87f2-c651d830af35] Running
	I0717 10:38:12.600014    3636 system_pods.go:89] "kube-apiserver-ha-572000-m03" [ac1c02f1-f023-4ad2-99bc-2657fb9d50d7] Running
	I0717 10:38:12.600018    3636 system_pods.go:89] "kube-controller-manager-ha-572000" [075c4d23-04bd-404a-822d-ae7d326a68ac] Running
	I0717 10:38:12.600021    3636 system_pods.go:89] "kube-controller-manager-ha-572000-m02" [3014bdb3-bb86-48d5-a045-2a8dab026bd8] Running
	I0717 10:38:12.600024    3636 system_pods.go:89] "kube-controller-manager-ha-572000-m03" [3b76e220-3a5b-4dac-8f69-6b380799d2ef] Running
	I0717 10:38:12.600028    3636 system_pods.go:89] "kube-proxy-5wcph" [731f5b57-131e-4e97-b47a-036b8d4edbcd] Running
	I0717 10:38:12.600031    3636 system_pods.go:89] "kube-proxy-h7k9z" [cf687084-b40d-43b6-9476-8c60c5f37d1d] Running
	I0717 10:38:12.600035    3636 system_pods.go:89] "kube-proxy-hst7h" [2b7c8d2c-3d71-4357-9429-1d8438c446f5] Running
	I0717 10:38:12.600038    3636 system_pods.go:89] "kube-proxy-v6jxh" [3f952fc8-747f-49da-b400-5212dba538d8] Running
	I0717 10:38:12.600041    3636 system_pods.go:89] "kube-scheduler-ha-572000" [da36a422-ba49-4cd6-8f47-9be7c43657be] Running
	I0717 10:38:12.600044    3636 system_pods.go:89] "kube-scheduler-ha-572000-m02" [13e289a2-8d3c-478f-9220-1d114dd5bf62] Running
	I0717 10:38:12.600048    3636 system_pods.go:89] "kube-scheduler-ha-572000-m03" [428716db-8096-4b57-9f90-e706d5683852] Running
	I0717 10:38:12.600051    3636 system_pods.go:89] "kube-vip-ha-572000" [289bb6df-c101-49d3-9e4a-6c0ecdd26551] Running
	I0717 10:38:12.600054    3636 system_pods.go:89] "kube-vip-ha-572000-m02" [fa57b8d2-bc41-42af-9d9a-1b68f882e6fe] Running
	I0717 10:38:12.600058    3636 system_pods.go:89] "kube-vip-ha-572000-m03" [a35c5007-dfe3-4b58-b16c-d568b8f9c1c2] Running
	I0717 10:38:12.600061    3636 system_pods.go:89] "storage-provisioner" [1801216d-6529-4049-b874-1577132fc03f] Running
	I0717 10:38:12.600065    3636 system_pods.go:126] duration metric: took 209.164597ms to wait for k8s-apps to be running ...
	I0717 10:38:12.600076    3636 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 10:38:12.600137    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 10:38:12.610524    3636 system_svc.go:56] duration metric: took 10.448568ms WaitForService to wait for kubelet
	I0717 10:38:12.610538    3636 kubeadm.go:582] duration metric: took 14.868933199s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:38:12.610564    3636 node_conditions.go:102] verifying NodePressure condition ...
	I0717 10:38:12.789306    3636 request.go:629] Waited for 178.678322ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0717 10:38:12.789427    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0717 10:38:12.789438    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:12.789448    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:12.789457    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:12.793007    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:12.794084    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:38:12.794097    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:38:12.794107    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:38:12.794110    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:38:12.794114    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:38:12.794122    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:38:12.794126    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:38:12.794129    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:38:12.794133    3636 node_conditions.go:105] duration metric: took 183.560156ms to run NodePressure ...
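
Note: the NodePressure step above only inspects each node's reported capacity: four nodes, each advertising 2 CPUs and 17734596Ki of ephemeral storage. The client-go sketch below lists the nodes and prints those same two figures; the kubeconfig path is a placeholder.

    // nodecapacity.go - hedged sketch: print the per-node CPU and ephemeral
    // storage capacity that the NodePressure verification reads.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
    	}
    }
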
	I0717 10:38:12.794140    3636 start.go:241] waiting for startup goroutines ...
	I0717 10:38:12.794158    3636 start.go:255] writing updated cluster config ...
	I0717 10:38:12.815984    3636 out.go:177] 
	I0717 10:38:12.836616    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:38:12.836683    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:38:12.857448    3636 out.go:177] * Starting "ha-572000-m03" control-plane node in "ha-572000" cluster
	I0717 10:38:12.899463    3636 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:38:12.899506    3636 cache.go:56] Caching tarball of preloaded images
	I0717 10:38:12.899666    3636 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:38:12.899684    3636 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:38:12.899813    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:38:12.900669    3636 start.go:360] acquireMachinesLock for ha-572000-m03: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:38:12.900765    3636 start.go:364] duration metric: took 73.243µs to acquireMachinesLock for "ha-572000-m03"
	I0717 10:38:12.900790    3636 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:38:12.900816    3636 fix.go:54] fixHost starting: m03
	I0717 10:38:12.901158    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:38:12.901182    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:38:12.910100    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51987
	I0717 10:38:12.910428    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:38:12.910808    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:38:12.910824    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:38:12.911027    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:38:12.911151    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:12.911236    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetState
	I0717 10:38:12.911315    3636 main.go:141] libmachine: (ha-572000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:38:12.911405    3636 main.go:141] libmachine: (ha-572000-m03) DBG | hyperkit pid from json: 2972
	I0717 10:38:12.912336    3636 main.go:141] libmachine: (ha-572000-m03) DBG | hyperkit pid 2972 missing from process table
	I0717 10:38:12.912361    3636 fix.go:112] recreateIfNeeded on ha-572000-m03: state=Stopped err=<nil>
	I0717 10:38:12.912369    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	W0717 10:38:12.912452    3636 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:38:12.933536    3636 out.go:177] * Restarting existing hyperkit VM for "ha-572000-m03" ...
	I0717 10:38:12.975448    3636 main.go:141] libmachine: (ha-572000-m03) Calling .Start
	I0717 10:38:12.975666    3636 main.go:141] libmachine: (ha-572000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:38:12.975716    3636 main.go:141] libmachine: (ha-572000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/hyperkit.pid
	I0717 10:38:12.977484    3636 main.go:141] libmachine: (ha-572000-m03) DBG | hyperkit pid 2972 missing from process table
	I0717 10:38:12.977496    3636 main.go:141] libmachine: (ha-572000-m03) DBG | pid 2972 is in state "Stopped"
	I0717 10:38:12.977512    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/hyperkit.pid...
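
Note: the "hyperkit pid 2972 missing from process table" / "pid 2972 is in state \"Stopped\"" sequence above is how the driver detects an unclean shutdown: the pid file survived, but no live process owns that pid, so the stale file is removed before a new hyperkit process (pid 3665 below) is launched. A small Go sketch of that liveness test and cleanup follows; the pid-file path is illustrative.

    // stalepid.go - hedged sketch of the stale pid-file check above: signal 0
    // tests whether the recorded pid is still alive; a dead pid means the file
    // can be removed safely. Unix-only.
    package main

    import (
    	"fmt"
    	"os"
    	"strconv"
    	"strings"
    	"syscall"
    )

    func pidAlive(pid int) bool {
    	proc, err := os.FindProcess(pid) // always succeeds on Unix
    	if err != nil {
    		return false
    	}
    	// Signal 0 delivers nothing but reports whether the process exists.
    	return proc.Signal(syscall.Signal(0)) == nil
    }

    func main() {
    	const pidFile = "/path/to/hyperkit.pid" // illustrative path
    	data, err := os.ReadFile(pidFile)
    	if err != nil {
    		fmt.Println("no pid file:", err)
    		return
    	}
    	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
    	if err != nil {
    		fmt.Println("unreadable pid file:", err)
    		return
    	}
    	if pidAlive(pid) {
    		fmt.Printf("pid %d is still running; leaving pid file alone\n", pid)
    		return
    	}
    	fmt.Printf("pid %d is gone; removing stale pid file\n", pid)
    	_ = os.Remove(pidFile)
    }
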
	I0717 10:38:12.977862    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Using UUID 5064fb5d-6e32-4be4-8d75-15b09204e5f5
	I0717 10:38:13.005572    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Generated MAC 6e:d3:62:da:43:cf
	I0717 10:38:13.005591    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:38:13.005736    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5064fb5d-6e32-4be4-8d75-15b09204e5f5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acc00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:38:13.005764    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5064fb5d-6e32-4be4-8d75-15b09204e5f5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acc00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:38:13.005828    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5064fb5d-6e32-4be4-8d75-15b09204e5f5", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/ha-572000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:38:13.005888    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5064fb5d-6e32-4be4-8d75-15b09204e5f5 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/ha-572000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:38:13.005909    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:38:13.007252    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 DEBUG: hyperkit: Pid is 3665
	I0717 10:38:13.007703    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Attempt 0
	I0717 10:38:13.007718    3636 main.go:141] libmachine: (ha-572000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:38:13.007809    3636 main.go:141] libmachine: (ha-572000-m03) DBG | hyperkit pid from json: 3665
	I0717 10:38:13.009827    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Searching for 6e:d3:62:da:43:cf in /var/db/dhcpd_leases ...
	I0717 10:38:13.009874    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:38:13.009921    3636 main.go:141] libmachine: (ha-572000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x669952ec}
	I0717 10:38:13.009945    3636 main.go:141] libmachine: (ha-572000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x669952da}
	I0717 10:38:13.009959    3636 main.go:141] libmachine: (ha-572000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:37:45:6a:f1:7f ID:1,1e:37:45:6a:f1:7f Lease:0x6698001b}
	I0717 10:38:13.009965    3636 main.go:141] libmachine: (ha-572000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:d3:62:da:43:cf ID:1,6e:d3:62:da:43:cf Lease:0x669950e4}
	I0717 10:38:13.009979    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetConfigRaw
	I0717 10:38:13.009982    3636 main.go:141] libmachine: (ha-572000-m03) DBG | Found match: 6e:d3:62:da:43:cf
	I0717 10:38:13.009992    3636 main.go:141] libmachine: (ha-572000-m03) DBG | IP: 192.169.0.7
	I0717 10:38:13.010657    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetIP
	I0717 10:38:13.010834    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:38:13.011336    3636 machine.go:94] provisionDockerMachine start ...
	I0717 10:38:13.011346    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:13.011471    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:13.011562    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:13.011675    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:13.011768    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:13.011883    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:13.012034    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:13.012203    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:13.012211    3636 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:38:13.014976    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:38:13.023104    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:38:13.024110    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:38:13.024135    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:38:13.024157    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:38:13.024175    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:38:13.404157    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:38:13.404173    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:38:13.519656    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:38:13.519690    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:38:13.519727    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:38:13.519751    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:38:13.520524    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:38:13.520534    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:38:18.810258    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:18 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0717 10:38:18.810297    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:18 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0717 10:38:18.810307    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:18 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0717 10:38:18.834790    3636 main.go:141] libmachine: (ha-572000-m03) DBG | 2024/07/17 10:38:18 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0717 10:38:24.076646    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:38:24.076665    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetMachineName
	I0717 10:38:24.076790    3636 buildroot.go:166] provisioning hostname "ha-572000-m03"
	I0717 10:38:24.076802    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetMachineName
	I0717 10:38:24.076886    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.077024    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.077111    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.077196    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.077278    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.077404    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:24.077556    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:24.077565    3636 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000-m03 && echo "ha-572000-m03" | sudo tee /etc/hostname
	I0717 10:38:24.142857    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000-m03
	
	I0717 10:38:24.142872    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.143001    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.143104    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.143196    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.143280    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.143395    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:24.143539    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:24.143551    3636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:38:24.203331    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:38:24.203349    3636 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:38:24.203359    3636 buildroot.go:174] setting up certificates
	I0717 10:38:24.203364    3636 provision.go:84] configureAuth start
	I0717 10:38:24.203370    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetMachineName
	I0717 10:38:24.203518    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetIP
	I0717 10:38:24.203623    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.203721    3636 provision.go:143] copyHostCerts
	I0717 10:38:24.203751    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:38:24.203800    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:38:24.203806    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:38:24.203931    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:38:24.204144    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:38:24.204174    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:38:24.204179    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:38:24.204294    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:38:24.204463    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:38:24.204496    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:38:24.204500    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:38:24.204570    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:38:24.204726    3636 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000-m03 san=[127.0.0.1 192.169.0.7 ha-572000-m03 localhost minikube]
	I0717 10:38:24.389534    3636 provision.go:177] copyRemoteCerts
	I0717 10:38:24.389582    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:38:24.389597    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.389749    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.389840    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.389936    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.390018    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa Username:docker}
	I0717 10:38:24.424587    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:38:24.424660    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:38:24.444455    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:38:24.444522    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 10:38:24.465006    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:38:24.465071    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:38:24.485065    3636 provision.go:87] duration metric: took 281.685984ms to configureAuth
	I0717 10:38:24.485079    3636 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:38:24.485254    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:38:24.485268    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:24.485399    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.485509    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.485606    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.485695    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.485780    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.485889    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:24.486018    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:24.486026    3636 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:38:24.539772    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:38:24.539786    3636 buildroot.go:70] root file system type: tmpfs
	I0717 10:38:24.539874    3636 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:38:24.539885    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.540019    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.540102    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.540205    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.540313    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.540462    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:24.540607    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:24.540655    3636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:38:24.605074    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:38:24.605091    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:24.605230    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:24.605339    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.605424    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:24.605494    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:24.605620    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:24.605771    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:24.605784    3636 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:38:26.231394    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:38:26.231416    3636 machine.go:97] duration metric: took 13.21973714s to provisionDockerMachine
	I0717 10:38:26.231428    3636 start.go:293] postStartSetup for "ha-572000-m03" (driver="hyperkit")
	I0717 10:38:26.231437    3636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:38:26.231448    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.231633    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:38:26.231652    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:26.231764    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:26.231872    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.231959    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:26.232054    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa Username:docker}
	I0717 10:38:26.266647    3636 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:38:26.269791    3636 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:38:26.269801    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:38:26.269897    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:38:26.270060    3636 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:38:26.270067    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:38:26.270227    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:38:26.278127    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:38:26.297704    3636 start.go:296] duration metric: took 66.264765ms for postStartSetup
	I0717 10:38:26.297725    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.297894    3636 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:38:26.297906    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:26.297982    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:26.298095    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.298185    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:26.298259    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa Username:docker}
	I0717 10:38:26.332566    3636 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:38:26.332629    3636 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:38:26.364567    3636 fix.go:56] duration metric: took 13.463410955s for fixHost
	I0717 10:38:26.364593    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:26.364774    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:26.364878    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.364991    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.365075    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:26.365213    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:38:26.365360    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0717 10:38:26.365368    3636 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 10:38:26.420992    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237906.507932482
	
	I0717 10:38:26.421006    3636 fix.go:216] guest clock: 1721237906.507932482
	I0717 10:38:26.421017    3636 fix.go:229] Guest: 2024-07-17 10:38:26.507932482 -0700 PDT Remote: 2024-07-17 10:38:26.364583 -0700 PDT m=+65.237237021 (delta=143.349482ms)
	I0717 10:38:26.421032    3636 fix.go:200] guest clock delta is within tolerance: 143.349482ms
	I0717 10:38:26.421036    3636 start.go:83] releasing machines lock for "ha-572000-m03", held for 13.519917261s
	I0717 10:38:26.421054    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.421181    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetIP
	I0717 10:38:26.443010    3636 out.go:177] * Found network options:
	I0717 10:38:26.464409    3636 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0717 10:38:26.487460    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:38:26.487486    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:38:26.487503    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.488209    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.488434    3636 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:38:26.488546    3636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:38:26.488583    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	W0717 10:38:26.488701    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:38:26.488736    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:38:26.488809    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:26.488843    3636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 10:38:26.488855    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:38:26.489040    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.489074    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:38:26.489211    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:26.489222    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:38:26.489320    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa Username:docker}
	I0717 10:38:26.489386    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:38:26.489533    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa Username:docker}
	W0717 10:38:26.520778    3636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:38:26.520842    3636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:38:26.572109    3636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:38:26.572138    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:38:26.572238    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:38:26.587958    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:38:26.596058    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:38:26.604066    3636 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:38:26.604116    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:38:26.612485    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:38:26.620942    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:38:26.629083    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:38:26.637275    3636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:38:26.645515    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:38:26.653717    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:38:26.662055    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:38:26.670484    3636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:38:26.677700    3636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:38:26.684962    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:26.781787    3636 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:38:26.802958    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:38:26.803029    3636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:38:26.827692    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:38:26.840860    3636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:38:26.869195    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:38:26.881705    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:38:26.892987    3636 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:38:26.911733    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:38:26.922817    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:38:26.938911    3636 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:38:26.941995    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:38:26.951587    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:38:26.965318    3636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:38:27.062809    3636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:38:27.181748    3636 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:38:27.181774    3636 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:38:27.195694    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:27.293396    3636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:38:29.632743    3636 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.339268733s)
	I0717 10:38:29.632812    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:38:29.643610    3636 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0717 10:38:29.657480    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:38:29.668578    3636 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:38:29.772887    3636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:38:29.887343    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:29.983127    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:38:29.998340    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:38:30.010843    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:30.124553    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 10:38:30.193605    3636 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:38:30.193684    3636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:38:30.198773    3636 start.go:563] Will wait 60s for crictl version
	I0717 10:38:30.198857    3636 ssh_runner.go:195] Run: which crictl
	I0717 10:38:30.202846    3636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:38:30.233816    3636 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 10:38:30.233915    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:38:30.253337    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:38:30.311688    3636 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:38:30.384020    3636 out.go:177]   - env NO_PROXY=192.169.0.5
	I0717 10:38:30.444054    3636 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0717 10:38:30.480967    3636 main.go:141] libmachine: (ha-572000-m03) Calling .GetIP
	I0717 10:38:30.481248    3636 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:38:30.485047    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:38:30.495793    3636 mustload.go:65] Loading cluster: ha-572000
	I0717 10:38:30.495976    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:38:30.496198    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:38:30.496221    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:38:30.505198    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52009
	I0717 10:38:30.505558    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:38:30.505932    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:38:30.505942    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:38:30.506222    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:38:30.506342    3636 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:38:30.506437    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:38:30.506526    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3650
	I0717 10:38:30.507493    3636 host.go:66] Checking if "ha-572000" exists ...
	I0717 10:38:30.507764    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:38:30.507798    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:38:30.516606    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52011
	I0717 10:38:30.516943    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:38:30.517270    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:38:30.517281    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:38:30.517513    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:38:30.517630    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:38:30.517732    3636 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000 for IP: 192.169.0.7
	I0717 10:38:30.517737    3636 certs.go:194] generating shared ca certs ...
	I0717 10:38:30.517751    3636 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:38:30.517912    3636 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:38:30.517964    3636 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:38:30.517973    3636 certs.go:256] generating profile certs ...
	I0717 10:38:30.518074    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key
	I0717 10:38:30.518169    3636 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.562e5459
	I0717 10:38:30.518222    3636 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key
	I0717 10:38:30.518229    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:38:30.518253    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:38:30.518273    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:38:30.518296    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:38:30.518321    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 10:38:30.518340    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 10:38:30.518358    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 10:38:30.518375    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 10:38:30.518476    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:38:30.518520    3636 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:38:30.518529    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:38:30.518566    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:38:30.518602    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:38:30.518634    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:38:30.518702    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:38:30.518736    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:38:30.518764    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:38:30.518783    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:38:30.518808    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:38:30.518899    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:38:30.518987    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:38:30.519076    3636 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:38:30.519152    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:38:30.544343    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0717 10:38:30.547913    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 10:38:30.557636    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0717 10:38:30.561333    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0717 10:38:30.570252    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 10:38:30.573631    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 10:38:30.582360    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0717 10:38:30.585629    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0717 10:38:30.593318    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0717 10:38:30.596412    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 10:38:30.604690    3636 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0717 10:38:30.607967    3636 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 10:38:30.616462    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:38:30.638619    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:38:30.660075    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:38:30.679834    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:38:30.699712    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 10:38:30.720095    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 10:38:30.740379    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 10:38:30.760837    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 10:38:30.780662    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:38:30.800982    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:38:30.821007    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:38:30.841019    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 10:38:30.855040    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0717 10:38:30.868897    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 10:38:30.882296    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0717 10:38:30.895884    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 10:38:30.909514    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 10:38:30.923253    3636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0717 10:38:30.937006    3636 ssh_runner.go:195] Run: openssl version
	I0717 10:38:30.941436    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:38:30.950257    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:38:30.955139    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:38:30.955192    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:38:30.959572    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 10:38:30.968160    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:38:30.976579    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:38:30.980025    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:38:30.980067    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:38:30.984288    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:38:30.992609    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:38:31.001221    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:38:31.004796    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:38:31.004841    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:38:31.009065    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
	I0717 10:38:31.017464    3636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:38:31.021030    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 10:38:31.025586    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 10:38:31.029983    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 10:38:31.034293    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 10:38:31.038625    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 10:38:31.042961    3636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 10:38:31.047275    3636 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.30.2 docker true true} ...
	I0717 10:38:31.047334    3636 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-572000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 10:38:31.047351    3636 kube-vip.go:115] generating kube-vip config ...
	I0717 10:38:31.047388    3636 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 10:38:31.059333    3636 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 10:38:31.059386    3636 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0717 10:38:31.059445    3636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:38:31.067249    3636 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:38:31.067300    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 10:38:31.075304    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0717 10:38:31.088747    3636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:38:31.102087    3636 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0717 10:38:31.115605    3636 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0717 10:38:31.118396    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
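The bash one-liner above pins control-plane.minikube.internal to the HA VIP: it filters any existing mapping out of /etc/hosts with grep -v, appends a fresh "192.169.0.254 (tab) control-plane.minikube.internal" entry, writes the result to a temp file, and copies that back over /etc/hosts so the file is replaced in one step rather than edited in place. The Go sketch below mirrors that rewrite; the function name and the direct WriteFile are illustrative simplifications (the logged command goes through a temp file and sudo cp).

	// hostspin.go: drop any existing mapping for a host name, then append a new one.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func pinHost(hostsPath, ip, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Mirrors grep -v $'\t<name>$': skip lines already mapping the name.
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		// Simplification: write in place; the logged command uses a temp file + cp.
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := pinHost("/etc/hosts", "192.169.0.254", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}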
	I0717 10:38:31.128499    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:31.224486    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:38:31.238639    3636 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:38:31.238848    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:38:31.259920    3636 out.go:177] * Verifying Kubernetes components...
	I0717 10:38:31.280661    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:38:31.399137    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:38:31.415018    3636 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:38:31.415346    3636 kapi.go:59] client config for ha-572000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa0a8b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 10:38:31.415404    3636 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
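The warning above is expected on restart: the kubeconfig (and the client config dump just before it) still points at the HA VIP 192.169.0.254:8443, but that address is only advertised once kube-vip wins leader election, so the client host is overridden to the primary node's direct endpoint 192.169.0.5:8443. A hedged sketch of that override using client-go's rest.Config follows; client-go is an assumed dependency and the values are copied from the log for illustration only.

	// override.go: point an API client at a node endpoint instead of a stale VIP.
	package main

	import (
		"fmt"

		"k8s.io/client-go/rest"
	)

	func main() {
		cfg := &rest.Config{
			// The VIP address the kubeconfig advertises.
			Host: "https://192.169.0.254:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.crt",
				KeyFile:  "/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key",
				CAFile:   "/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt",
			},
		}
		// While the VIP is still converging, talk to the primary control-plane node directly.
		cfg.Host = "https://192.169.0.5:8443"
		fmt.Println("client host:", cfg.Host)
	}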
	I0717 10:38:31.415666    3636 node_ready.go:35] waiting up to 6m0s for node "ha-572000-m03" to be "Ready" ...
	I0717 10:38:31.415725    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:38:31.415732    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.415740    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.415745    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.421957    3636 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 10:38:31.422260    3636 node_ready.go:49] node "ha-572000-m03" has status "Ready":"True"
	I0717 10:38:31.422274    3636 node_ready.go:38] duration metric: took 6.596243ms for node "ha-572000-m03" to be "Ready" ...
	I0717 10:38:31.422281    3636 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:38:31.422331    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:38:31.422337    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.422343    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.422347    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.431073    3636 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 10:38:31.436681    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
	I0717 10:38:31.436766    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:31.436772    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.436778    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.436782    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.440248    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:31.440722    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:31.440730    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.440735    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.440738    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.442939    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:31.937618    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:31.937636    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.937668    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.937673    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.940388    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:31.940820    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:31.940828    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:31.940834    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:31.940838    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:31.943159    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:32.437866    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:32.437879    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:32.437885    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:32.437888    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:32.446284    3636 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 10:38:32.446927    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:32.446936    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:32.446943    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:32.446948    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:32.452237    3636 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 10:38:32.937878    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:32.937890    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:32.937896    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:32.937901    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:32.940439    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:32.941049    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:32.941057    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:32.941064    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:32.941080    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:32.943760    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:33.437735    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:33.437751    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:33.437757    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:33.437760    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:33.440741    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:33.441277    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:33.441285    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:33.441291    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:33.441302    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:33.443897    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:33.444546    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
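Each ~500ms cycle above issues two GETs, one for the coredns-7db6d8ff4d-2phrp pod and one for the ha-572000 node, and pod_ready reports Ready: False until the pod's Ready condition turns True; the loop continues for up to the 6-minute budget declared earlier. A compact sketch of the same wait using client-go and apimachinery's wait helpers is below; both packages are assumed dependencies and error handling is trimmed to keep it short.

	// podwait.go: poll a pod every 500ms until its Ready condition is True or 6m pass.
	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Pod name copied from the log; 500ms/6m mirror the cadence and budget above.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-2phrp", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient API errors as "not ready yet"
				}
				return podReady(pod), nil
			})
		if err != nil {
			fmt.Fprintln(os.Stderr, "pod never became Ready:", err)
			os.Exit(1)
		}
		fmt.Println("pod is Ready")
	}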
	I0717 10:38:33.938758    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:33.938781    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:33.938787    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:33.938791    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:33.941068    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:33.941437    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:33.941445    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:33.941451    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:33.941462    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:33.943283    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:34.437334    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:34.437347    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:34.437354    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:34.437357    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:34.440066    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:34.440546    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:34.440554    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:34.440560    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:34.440563    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:34.442659    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:34.938574    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:34.938586    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:34.938593    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:34.938602    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:34.941243    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:34.941810    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:34.941818    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:34.941824    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:34.941827    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:34.943881    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:35.437928    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:35.437948    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:35.437959    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:35.437965    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:35.441416    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:35.441923    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:35.441931    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:35.441937    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:35.441941    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:35.443781    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:35.937111    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:35.937132    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:35.937144    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:35.937149    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:35.941097    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:35.941689    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:35.941702    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:35.941708    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:35.941711    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:35.943483    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:35.943912    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:36.437284    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:36.437298    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:36.437304    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:36.437308    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:36.439570    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:36.440110    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:36.440117    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:36.440127    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:36.440130    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:36.441781    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:36.938251    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:36.938279    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:36.938357    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:36.938372    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:36.941451    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:36.942095    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:36.942103    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:36.942109    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:36.942112    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:36.943809    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:37.438234    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:37.438246    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:37.438251    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:37.438256    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:37.440243    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:37.440651    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:37.440658    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:37.440664    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:37.440674    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:37.442390    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:37.938519    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:37.938538    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:37.938588    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:37.938592    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:37.940708    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:37.941242    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:37.941250    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:37.941256    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:37.941260    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:37.942969    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:38.437210    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:38.437229    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:38.437263    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:38.437275    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:38.440621    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:38.441113    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:38.441120    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:38.441126    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:38.441130    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:38.444813    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:38.445187    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:38.937338    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:38.937354    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:38.937363    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:38.937368    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:38.939598    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:38.940020    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:38.940027    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:38.940033    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:38.940038    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:38.941562    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:39.437538    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:39.437553    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:39.437563    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:39.437566    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:39.439993    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:39.440392    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:39.440400    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:39.440405    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:39.440408    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:39.442187    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:39.938827    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:39.938859    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:39.938867    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:39.938871    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:39.941007    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:39.941470    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:39.941477    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:39.941482    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:39.941486    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:39.943155    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:40.437526    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:40.437540    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:40.437546    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:40.437550    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:40.439587    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:40.440056    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:40.440063    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:40.440068    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:40.440072    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:40.441961    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:40.937672    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:40.937688    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:40.937697    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:40.937701    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:40.940217    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:40.940568    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:40.940576    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:40.940581    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:40.940585    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:40.942351    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:40.942718    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:41.437331    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:41.437344    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:41.437350    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:41.437354    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:41.439766    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:41.440280    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:41.440287    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:41.440293    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:41.440296    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:41.441965    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:41.938758    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:41.938778    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:41.938790    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:41.938798    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:41.941589    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:41.942137    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:41.942146    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:41.942152    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:41.942157    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:41.943723    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:42.438172    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:42.438185    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:42.438194    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:42.438198    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:42.440429    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:42.440980    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:42.440988    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:42.440994    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:42.440998    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:42.442893    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:42.938134    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:42.938172    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:42.938183    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:42.938191    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:42.940744    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:42.941114    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:42.941122    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:42.941127    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:42.941131    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:42.942787    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:42.943905    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:43.438163    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:43.438195    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:43.438217    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:43.438224    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:43.440858    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:43.441272    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:43.441279    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:43.441288    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:43.441292    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:43.443069    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:43.937578    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:43.937589    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:43.937596    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:43.937599    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:43.939582    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:43.940136    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:43.940144    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:43.940150    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:43.940152    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:43.941646    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:44.437231    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:44.437244    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:44.437250    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:44.437254    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:44.439651    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:44.440190    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:44.440197    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:44.440202    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:44.440206    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:44.442158    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:44.937185    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:44.937196    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:44.937203    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:44.937206    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:44.939361    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:44.939788    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:44.939796    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:44.939802    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:44.939805    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:44.941482    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:45.437377    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:45.437392    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:45.437401    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:45.437406    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:45.439768    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:45.440303    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:45.440311    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:45.440317    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:45.440320    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:45.441925    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:45.442312    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:45.939181    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:45.939236    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:45.939246    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:45.939253    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:45.941938    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:45.942549    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:45.942557    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:45.942563    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:45.942566    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:45.944281    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:46.437228    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:46.437238    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:46.437245    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:46.437248    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:46.439099    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:46.439744    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:46.439751    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:46.439757    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:46.439760    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:46.441200    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:46.938133    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:46.938186    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:46.938196    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:46.938202    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:46.940467    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:46.940876    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:46.940884    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:46.940890    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:46.940893    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:46.942527    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:47.437838    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:47.437850    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:47.437857    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:47.437861    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:47.440152    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:47.440651    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:47.440660    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:47.440665    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:47.440669    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:47.442745    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:47.443107    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:47.937851    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:47.937867    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:47.937873    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:47.937876    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:47.940047    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:47.940510    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:47.940517    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:47.940523    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:47.940530    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:47.942242    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:48.439255    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:48.439310    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:48.439329    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:48.439338    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:48.442468    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:48.443256    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:48.443264    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:48.443269    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:48.443272    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:48.444868    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:48.937733    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:48.937744    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:48.937750    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:48.937753    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:48.939731    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:48.940190    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:48.940198    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:48.940204    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:48.940207    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:48.941747    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:49.438149    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:49.438169    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:49.438181    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:49.438190    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:49.441135    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:49.441712    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:49.441721    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:49.441726    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:49.441738    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:49.443421    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:49.443800    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:49.937835    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:49.937887    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:49.937895    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:49.937905    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:49.940121    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:49.940667    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:49.940674    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:49.940680    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:49.940698    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:49.942630    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:50.438458    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:50.438469    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:50.438476    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:50.438483    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:50.440697    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:50.441412    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:50.441420    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:50.441426    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:50.441430    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:50.443161    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:50.937976    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:50.937995    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:50.938003    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:50.938009    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:50.940796    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:50.941307    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:50.941315    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:50.941320    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:50.941323    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:50.943029    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:51.437692    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:51.437705    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:51.437714    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:51.437720    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:51.440262    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:51.440918    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:51.440926    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:51.440932    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:51.440936    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:51.442631    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:51.937774    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:51.937792    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:51.937801    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:51.937807    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:51.940276    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:51.940668    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:51.940675    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:51.940681    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:51.940685    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:51.942296    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:51.942616    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:52.438854    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:52.438878    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:52.438892    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:52.438900    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:52.442008    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:52.442522    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:52.442530    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:52.442536    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:52.442540    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:52.444262    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:52.937664    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:52.937675    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:52.937684    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:52.937687    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:52.939825    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:52.940415    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:52.940422    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:52.940428    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:52.940432    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:52.942064    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:53.439277    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:53.439300    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:53.439309    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:53.439315    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:53.441705    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:53.442130    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:53.442138    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:53.442143    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:53.442146    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:53.443926    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:53.938741    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:53.938755    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:53.938785    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:53.938790    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:53.941015    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:53.941672    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:53.941680    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:53.941685    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:53.941689    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:53.943953    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:53.944413    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:54.438636    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:54.438654    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:54.438663    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:54.438668    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:54.441262    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:54.441677    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:54.441684    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:54.441690    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:54.441693    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:54.443309    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:54.938770    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:54.938788    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:54.938798    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:54.938802    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:54.941486    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:54.941877    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:54.941884    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:54.941890    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:54.941893    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:54.943590    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:55.438030    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:55.438049    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:55.438059    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:55.438064    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:55.440706    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:55.441272    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:55.441280    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:55.441289    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:55.441292    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:55.443295    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:55.938147    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:55.938203    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:55.938215    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:55.938222    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:55.940270    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:55.940729    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:55.940737    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:55.940742    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:55.940745    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:55.942359    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:56.437637    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:56.437654    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:56.437666    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:56.437671    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:56.440401    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:56.440900    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:56.440909    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:56.440916    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:56.440920    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:56.442737    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:56.443083    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:56.938496    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:56.938521    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:56.938533    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:56.938541    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:56.941967    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:38:56.942683    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:56.942691    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:56.942697    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:56.942707    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:56.944542    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:57.438317    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:57.438392    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:57.438405    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:57.438411    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:57.441323    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:57.441768    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:57.441776    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:57.441780    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:57.441793    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:57.443513    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:57.937977    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:57.937990    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:57.937996    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:57.938000    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:57.940155    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:57.940631    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:57.940639    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:57.940645    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:57.940650    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:57.942518    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:58.438589    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:58.438606    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:58.438612    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:58.438615    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:58.440808    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:58.441401    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:58.441409    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:58.441415    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:58.441423    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:58.443141    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:58.443478    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:38:58.938651    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:58.938670    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:58.938679    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:58.938683    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:58.940981    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:58.941414    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:58.941422    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:58.941428    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:58.941431    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:58.943207    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:59.437795    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:59.437809    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:59.437815    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:59.437819    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:59.440022    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:59.440439    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:59.440446    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:59.440452    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:59.440457    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:59.442209    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:38:59.938380    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:38:59.938393    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:59.938400    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:59.938403    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:59.940648    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:38:59.941030    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:38:59.941038    3636 round_trippers.go:469] Request Headers:
	I0717 10:38:59.941044    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:38:59.941048    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:38:59.942631    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:00.437586    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:00.437607    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:00.437616    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:00.437621    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:00.440082    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:00.440574    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:00.440582    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:00.440588    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:00.440591    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:00.442224    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:00.939171    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:00.939189    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:00.939198    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:00.939203    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:00.941658    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:00.942057    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:00.942065    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:00.942071    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:00.942075    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:00.943872    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:00.944304    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:01.438420    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:01.438444    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:01.438462    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:01.438475    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:01.441885    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:01.442448    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:01.442456    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:01.442462    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:01.442473    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:01.444325    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:01.937741    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:01.937759    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:01.937769    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:01.937774    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:01.941004    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:01.941638    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:01.941645    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:01.941651    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:01.941655    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:01.943421    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:02.439464    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:02.439515    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:02.439539    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:02.439547    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:02.442788    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:02.443568    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:02.443575    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:02.443581    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:02.443584    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:02.445070    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:02.939355    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:02.939398    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:02.939423    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:02.939432    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:02.943288    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:02.943786    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:02.943793    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:02.943798    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:02.943808    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:02.945549    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:02.945918    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:03.437814    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:03.437833    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:03.437846    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:03.437852    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:03.440696    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:03.441473    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:03.441481    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:03.441487    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:03.441494    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:03.443180    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:03.938154    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:03.938171    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:03.938179    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:03.938185    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:03.940749    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:03.941323    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:03.941330    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:03.941336    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:03.941338    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:03.942986    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:04.438509    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:04.438533    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:04.438544    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:04.438552    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:04.441587    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:04.442338    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:04.442346    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:04.442351    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:04.442354    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:04.443865    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:04.939464    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:04.939517    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:04.939527    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:04.939530    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:04.941589    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:04.942132    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:04.942139    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:04.942144    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:04.942147    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:04.943787    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:05.437854    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:05.437866    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:05.437872    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:05.437875    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:05.439895    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:05.440295    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:05.440303    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:05.440308    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:05.440312    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:05.441766    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:05.442130    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:05.937813    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:05.937871    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:05.937882    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:05.937888    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:05.940367    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:05.940885    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:05.940892    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:05.940898    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:05.940902    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:05.942721    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:06.438966    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:06.438991    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:06.439007    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:06.439020    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:06.442137    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:06.442785    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:06.442793    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:06.442799    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:06.442802    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:06.444436    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:06.938695    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:06.938714    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:06.938723    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:06.938727    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:06.941327    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:06.941790    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:06.941798    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:06.941802    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:06.941805    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:06.943432    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:07.438469    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:07.438553    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:07.438567    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:07.438573    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:07.442157    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:07.442736    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:07.442744    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:07.442750    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:07.442754    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:07.444281    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:07.444696    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:07.937804    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:07.937815    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:07.937821    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:07.937823    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:07.939794    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:07.940418    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:07.940426    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:07.940432    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:07.940435    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:07.942179    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:08.437799    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:08.437814    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:08.437821    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:08.437827    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:08.440300    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:08.440760    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:08.440768    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:08.440773    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:08.440776    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:08.442402    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:08.938764    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:08.938789    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:08.938896    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:08.938909    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:08.942041    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:08.942737    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:08.942744    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:08.942751    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:08.942754    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:08.944691    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:09.437781    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:09.437795    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:09.437802    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:09.437807    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:09.440310    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:09.440716    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:09.440725    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:09.440731    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:09.440741    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:09.442571    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:09.937834    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:09.937847    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:09.937853    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:09.937856    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:09.939731    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:09.940144    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:09.940153    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:09.940159    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:09.940163    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:09.941982    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:09.942266    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:10.438403    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:10.438414    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:10.438421    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:10.438424    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:10.440749    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:10.441120    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:10.441127    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:10.441133    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:10.441138    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:10.442757    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:10.939169    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:10.939227    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:10.939238    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:10.939244    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:10.942004    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:10.942575    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:10.942582    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:10.942588    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:10.942591    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:10.944436    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:11.438251    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:11.438276    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:11.438353    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:11.438364    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:11.441421    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:11.441961    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:11.441969    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:11.441975    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:11.441979    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:11.446242    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:39:11.938022    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:11.938033    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:11.938040    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:11.938044    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:11.939924    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:11.940511    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:11.940519    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:11.940525    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:11.940528    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:11.942450    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:11.942833    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:12.439246    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:12.439269    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:12.439279    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:12.439285    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:12.442445    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:12.443020    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:12.443027    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:12.443033    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:12.443037    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:12.444778    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:12.939028    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:12.939059    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:12.939075    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:12.939144    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:12.941663    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:12.942169    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:12.942176    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:12.942182    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:12.942198    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:12.944174    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:13.439017    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:13.439030    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:13.439036    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:13.439039    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:13.441436    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:13.442003    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:13.442011    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:13.442017    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:13.442020    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:13.443715    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:13.939125    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:13.939138    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:13.939150    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:13.939154    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:13.941396    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:13.942124    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:13.942133    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:13.942138    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:13.942141    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:13.943860    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:13.944207    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:14.439525    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:14.439539    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:14.439545    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:14.439549    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:14.441636    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:14.442072    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:14.442080    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:14.442085    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:14.442088    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:14.443727    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:14.938392    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:14.938412    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:14.938425    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:14.938431    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:14.941839    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:14.942527    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:14.942535    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:14.942541    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:14.942556    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:14.944390    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:15.439124    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:15.439154    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:15.439236    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:15.439243    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:15.442572    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:15.443123    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:15.443133    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:15.443141    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:15.443145    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:15.445133    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:15.938789    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:15.938855    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:15.938870    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:15.938877    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:15.941774    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:15.942286    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:15.942294    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:15.942300    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:15.942304    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:15.944348    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:15.944660    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:16.439349    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:16.439368    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:16.439378    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:16.439383    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:16.441938    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:16.442524    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:16.442532    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:16.442537    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:16.442548    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:16.444186    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:16.938018    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:16.938067    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:16.938075    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:16.938081    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:16.940227    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:16.940771    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:16.940780    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:16.940785    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:16.940789    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:16.942609    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:17.438002    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:17.438028    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:17.438034    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:17.438038    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:17.440220    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:17.440724    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:17.440733    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:17.440739    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:17.440742    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:17.442604    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:17.938219    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:17.938237    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:17.938249    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:17.938255    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:17.941281    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:17.941690    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:17.941698    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:17.941703    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:17.941707    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:17.943715    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:18.439167    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:18.439186    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:18.439195    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:18.439200    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:18.441725    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:18.442096    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:18.442104    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:18.442109    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:18.442113    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:18.443738    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:18.444159    3636 pod_ready.go:102] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"False"
	I0717 10:39:18.939393    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:18.939469    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:18.939479    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:18.939485    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:18.941987    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:18.942423    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:18.942431    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:18.942436    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:18.942439    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:18.944249    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:19.438795    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:39:19.438808    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.438814    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.438816    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.441023    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:19.441456    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:19.441464    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.441470    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.441475    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.443744    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:19.444095    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.444104    3636 pod_ready.go:81] duration metric: took 48.006189425s for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
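
The long run of paired GETs above is the readiness poll behind this duration metric: the checker fetches the coredns pod, inspects its Ready condition, fetches the node it is scheduled on, then retries roughly every 500 ms until the 6m0s budget runs out. A minimal sketch of that pattern, assuming a generic client-go clientset (illustrative only, not minikube's actual pod_ready implementation):

	package readiness

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady mirrors the polling visible in the log: GET the pod, check its
	// Ready condition, GET the node it runs on, sleep ~500ms, repeat until the
	// timeout (the log uses a 6m0s budget per pod).
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil // corresponds to pod_ready.go:92, has status "Ready":"True"
				}
			}
			// The node hosting the pod is re-checked on every cycle as well.
			if _, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{}); err != nil {
				return err
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pod %s/%s still not Ready after %s", ns, name, timeout)
	}

In this run the loop cycled for about 48 seconds before coredns-7db6d8ff4d-2phrp finally reported Ready, even though every individual GET returned 200 OK within a few milliseconds.
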
	I0717 10:39:19.444111    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.444150    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9dzd5
	I0717 10:39:19.444154    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.444160    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.444165    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.447092    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:19.447847    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:19.447856    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.447861    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.447865    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.449618    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:19.449899    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.449908    3636 pod_ready.go:81] duration metric: took 5.792129ms for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.449915    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.449950    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000
	I0717 10:39:19.449955    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.449961    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.449966    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.451887    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:19.452242    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:19.452249    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.452255    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.452259    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.455734    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:19.456038    3636 pod_ready.go:92] pod "etcd-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.456048    3636 pod_ready.go:81] duration metric: took 6.128452ms for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.456055    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.456091    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m02
	I0717 10:39:19.456096    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.456102    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.456104    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.459121    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:19.459474    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:19.459482    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.459487    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.459491    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.461049    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:19.461321    3636 pod_ready.go:92] pod "etcd-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.461330    3636 pod_ready.go:81] duration metric: took 5.269541ms for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.461337    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.461367    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m03
	I0717 10:39:19.461373    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.461378    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.461381    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.463280    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:19.463738    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:19.463745    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.463750    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.463754    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.466609    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:19.466864    3636 pod_ready.go:92] pod "etcd-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.466874    3636 pod_ready.go:81] duration metric: took 5.532002ms for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.466885    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.640514    3636 request.go:629] Waited for 173.589043ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:39:19.640593    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:39:19.640602    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.640610    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.640614    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.643241    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:19.839100    3636 request.go:629] Waited for 195.343311ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:19.839145    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:19.839152    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:19.839188    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:19.839194    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:19.845230    3636 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 10:39:19.845548    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:19.845558    3636 pod_ready.go:81] duration metric: took 378.657463ms for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:19.845565    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:20.040239    3636 request.go:629] Waited for 194.632219ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:39:20.040319    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:39:20.040328    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:20.040336    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:20.040342    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:20.042714    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:20.240297    3636 request.go:629] Waited for 196.995157ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:20.240378    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:20.240384    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:20.240390    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:20.240396    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:20.242369    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:20.242695    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:20.242704    3636 pod_ready.go:81] duration metric: took 397.124019ms for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:20.242711    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:20.439359    3636 request.go:629] Waited for 196.544114ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:39:20.439408    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:39:20.439416    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:20.439427    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:20.439434    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:20.442435    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:20.638955    3636 request.go:629] Waited for 196.048572ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:20.639046    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:20.639056    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:20.639068    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:20.639075    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:20.642008    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:20.642430    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:20.642442    3636 pod_ready.go:81] duration metric: took 399.714561ms for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:20.642451    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:20.838986    3636 request.go:629] Waited for 196.455933ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:39:20.839106    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:39:20.839119    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:20.839131    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:20.839141    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:20.842621    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:21.039118    3636 request.go:629] Waited for 195.900542ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:21.039165    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:21.039176    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:21.039188    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:21.039196    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:21.042149    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:21.042711    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:21.042741    3636 pod_ready.go:81] duration metric: took 400.268935ms for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:21.042748    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:21.238981    3636 request.go:629] Waited for 196.178207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:39:21.239040    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:39:21.239051    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:21.239063    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:21.239071    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:21.242170    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:21.440519    3636 request.go:629] Waited for 197.63517ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:21.440569    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:21.440581    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:21.440597    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:21.440606    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:21.443784    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:21.444203    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:21.444212    3636 pod_ready.go:81] duration metric: took 401.448672ms for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:21.444219    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:21.640166    3636 request.go:629] Waited for 195.890355ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:39:21.640224    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:39:21.640235    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:21.640246    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:21.640254    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:21.643178    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:21.840025    3636 request.go:629] Waited for 196.38625ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:21.840077    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:21.840087    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:21.840099    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:21.840107    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:21.842881    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:21.843340    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:21.843349    3636 pod_ready.go:81] duration metric: took 399.115148ms for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:21.843356    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:22.038929    3636 request.go:629] Waited for 195.527396ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:39:22.038980    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:39:22.038991    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:22.039000    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:22.039006    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:22.041797    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:22.239447    3636 request.go:629] Waited for 196.85315ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:39:22.239496    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:39:22.239504    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:22.239515    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:22.239525    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:22.242443    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:22.242932    3636 pod_ready.go:97] node "ha-572000-m04" hosting pod "kube-proxy-5wcph" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-572000-m04" has status "Ready":"Unknown"
	I0717 10:39:22.242948    3636 pod_ready.go:81] duration metric: took 399.575996ms for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	E0717 10:39:22.242956    3636 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-572000-m04" hosting pod "kube-proxy-5wcph" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-572000-m04" has status "Ready":"Unknown"
	I0717 10:39:22.242964    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:22.439269    3636 request.go:629] Waited for 196.255356ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:39:22.439393    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:39:22.439403    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:22.439414    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:22.439420    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:22.442456    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:22.640394    3636 request.go:629] Waited for 197.266214ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:22.640491    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:22.640500    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:22.640509    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:22.640514    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:22.643031    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:22.643471    3636 pod_ready.go:92] pod "kube-proxy-h7k9z" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:22.643480    3636 pod_ready.go:81] duration metric: took 400.50076ms for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:22.643487    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:22.839377    3636 request.go:629] Waited for 195.844443ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:39:22.839468    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:39:22.839477    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:22.839485    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:22.839491    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:22.841921    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:23.039004    3636 request.go:629] Waited for 196.604394ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:23.039109    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:23.039120    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:23.039131    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:23.039138    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:23.042022    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:23.042449    3636 pod_ready.go:92] pod "kube-proxy-hst7h" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:23.042462    3636 pod_ready.go:81] duration metric: took 398.959822ms for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:23.042480    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:23.240001    3636 request.go:629] Waited for 197.469314ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:39:23.240093    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:39:23.240110    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:23.240121    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:23.240131    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:23.243284    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:23.439300    3636 request.go:629] Waited for 195.300943ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:23.439332    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:23.439336    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:23.439343    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:23.439370    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:23.441287    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:39:23.441722    3636 pod_ready.go:92] pod "kube-proxy-v6jxh" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:23.441732    3636 pod_ready.go:81] duration metric: took 399.23495ms for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:23.441739    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:23.638943    3636 request.go:629] Waited for 197.165268ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:39:23.639000    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:39:23.639006    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:23.639012    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:23.639017    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:23.641044    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:23.840535    3636 request.go:629] Waited for 199.126882ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:23.840627    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:39:23.840639    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:23.840679    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:23.840691    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:23.843464    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:23.843963    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:23.843976    3636 pod_ready.go:81] duration metric: took 402.220047ms for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:23.843984    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:24.039540    3636 request.go:629] Waited for 195.50331ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:39:24.039598    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:39:24.039670    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.039685    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.039691    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.042477    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:24.239459    3636 request.go:629] Waited for 196.457492ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:24.239561    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:39:24.239573    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.239585    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.239591    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.242659    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:24.243312    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:24.243327    3636 pod_ready.go:81] duration metric: took 399.325407ms for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:24.243336    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:24.439080    3636 request.go:629] Waited for 195.673891ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:39:24.439191    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:39:24.439202    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.439213    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.439223    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.443262    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:39:24.639182    3636 request.go:629] Waited for 195.517919ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:24.639292    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:39:24.639303    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.639316    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.639324    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.642200    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:39:24.642657    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:39:24.642666    3636 pod_ready.go:81] duration metric: took 399.31371ms for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:39:24.642674    3636 pod_ready.go:38] duration metric: took 53.219035328s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
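	Note: the waits logged above poll each control-plane pod through the apiserver at https://192.169.0.5:8443 and check its Ready condition. Below is a minimal client-go sketch of that per-pod check (illustrative only, not minikube's pod_ready.go; the kubeconfig path and pod name are assumptions):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path; point it at the profile's kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same request the log shows: GET /api/v1/namespaces/kube-system/pods/<name>
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-ha-572000", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
	}

	As the log shows above for kube-proxy-5wcph on ha-572000-m04, minikube also skips the wait when the hosting node itself is not Ready.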
	I0717 10:39:24.642686    3636 api_server.go:52] waiting for apiserver process to appear ...
	I0717 10:39:24.642749    3636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 10:39:24.655291    3636 api_server.go:72] duration metric: took 53.415271815s to wait for apiserver process to appear ...
	I0717 10:39:24.655303    3636 api_server.go:88] waiting for apiserver healthz status ...
	I0717 10:39:24.655313    3636 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0717 10:39:24.659504    3636 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0717 10:39:24.659539    3636 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0717 10:39:24.659544    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.659549    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.659552    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.660035    3636 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:39:24.660129    3636 api_server.go:141] control plane version: v1.30.2
	I0717 10:39:24.660138    3636 api_server.go:131] duration metric: took 4.830633ms to wait for apiserver health ...
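	Note: the healthz probe and version read above can be reproduced against the same endpoint with client-go's discovery REST client. A sketch, assuming the same hypothetical kubeconfig path as in the previous note:

	package main

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// GET /healthz, as the log does before reading /version.
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.TODO()).Raw()
		if err != nil {
			panic(err)
		}
		fmt.Println("healthz:", string(body)) // expect "ok"
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			panic(err)
		}
		fmt.Println("control plane version:", v.GitVersion) // v1.30.2 in this run
	}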
	I0717 10:39:24.660142    3636 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 10:39:24.840282    3636 request.go:629] Waited for 180.099076ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:39:24.840353    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:39:24.840361    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:24.840369    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:24.840373    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:24.845121    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:39:24.850038    3636 system_pods.go:59] 26 kube-system pods found
	I0717 10:39:24.850051    3636 system_pods.go:61] "coredns-7db6d8ff4d-2phrp" [e76a91cf-8a14-454c-baee-7c0128645e0e] Running
	I0717 10:39:24.850054    3636 system_pods.go:61] "coredns-7db6d8ff4d-9dzd5" [f9a7cff1-56a3-4600-ae2f-a951dda10753] Running
	I0717 10:39:24.850057    3636 system_pods.go:61] "etcd-ha-572000" [2d7b717e-d404-4c63-afe9-799de6711964] Running
	I0717 10:39:24.850060    3636 system_pods.go:61] "etcd-ha-572000-m02" [8abc7662-c159-4953-9aba-11a75b4e7d65] Running
	I0717 10:39:24.850062    3636 system_pods.go:61] "etcd-ha-572000-m03" [78629908-d362-4a27-933f-2b929867c22f] Running
	I0717 10:39:24.850065    3636 system_pods.go:61] "kindnet-5xsrp" [0b84aa4d-7452-47f6-8052-f063e2e8ef7b] Running
	I0717 10:39:24.850067    3636 system_pods.go:61] "kindnet-72zfp" [0cc735c4-08b3-499a-b9e2-8c38377f371b] Running
	I0717 10:39:24.850069    3636 system_pods.go:61] "kindnet-g2m92" [be3d84cf-f9e8-426c-b02a-49eb7eed9d6a] Running
	I0717 10:39:24.850071    3636 system_pods.go:61] "kindnet-t85bv" [8649cc70-8ce8-4caf-938e-bd253fa5b7ae] Running
	I0717 10:39:24.850074    3636 system_pods.go:61] "kube-apiserver-ha-572000" [6409591d-7a50-414b-be63-44d5ddb0b0e0] Running
	I0717 10:39:24.850076    3636 system_pods.go:61] "kube-apiserver-ha-572000-m02" [d658459c-67f9-4798-87f2-c651d830af35] Running
	I0717 10:39:24.850078    3636 system_pods.go:61] "kube-apiserver-ha-572000-m03" [ac1c02f1-f023-4ad2-99bc-2657fb9d50d7] Running
	I0717 10:39:24.850081    3636 system_pods.go:61] "kube-controller-manager-ha-572000" [075c4d23-04bd-404a-822d-ae7d326a68ac] Running
	I0717 10:39:24.850084    3636 system_pods.go:61] "kube-controller-manager-ha-572000-m02" [3014bdb3-bb86-48d5-a045-2a8dab026bd8] Running
	I0717 10:39:24.850086    3636 system_pods.go:61] "kube-controller-manager-ha-572000-m03" [3b76e220-3a5b-4dac-8f69-6b380799d2ef] Running
	I0717 10:39:24.850088    3636 system_pods.go:61] "kube-proxy-5wcph" [731f5b57-131e-4e97-b47a-036b8d4edbcd] Running
	I0717 10:39:24.850105    3636 system_pods.go:61] "kube-proxy-h7k9z" [cf687084-b40d-43b6-9476-8c60c5f37d1d] Running
	I0717 10:39:24.850110    3636 system_pods.go:61] "kube-proxy-hst7h" [2b7c8d2c-3d71-4357-9429-1d8438c446f5] Running
	I0717 10:39:24.850113    3636 system_pods.go:61] "kube-proxy-v6jxh" [3f952fc8-747f-49da-b400-5212dba538d8] Running
	I0717 10:39:24.850116    3636 system_pods.go:61] "kube-scheduler-ha-572000" [da36a422-ba49-4cd6-8f47-9be7c43657be] Running
	I0717 10:39:24.850118    3636 system_pods.go:61] "kube-scheduler-ha-572000-m02" [13e289a2-8d3c-478f-9220-1d114dd5bf62] Running
	I0717 10:39:24.850121    3636 system_pods.go:61] "kube-scheduler-ha-572000-m03" [428716db-8096-4b57-9f90-e706d5683852] Running
	I0717 10:39:24.850124    3636 system_pods.go:61] "kube-vip-ha-572000" [57c4f54d-e859-4429-90dd-0402a9e73727] Running
	I0717 10:39:24.850127    3636 system_pods.go:61] "kube-vip-ha-572000-m02" [fa57b8d2-bc41-42af-9d9a-1b68f882e6fe] Running
	I0717 10:39:24.850129    3636 system_pods.go:61] "kube-vip-ha-572000-m03" [a35c5007-dfe3-4b58-b16c-d568b8f9c1c2] Running
	I0717 10:39:24.850133    3636 system_pods.go:61] "storage-provisioner" [1801216d-6529-4049-b874-1577132fc03f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 10:39:24.850139    3636 system_pods.go:74] duration metric: took 189.987862ms to wait for pod list to return data ...
	I0717 10:39:24.850145    3636 default_sa.go:34] waiting for default service account to be created ...
	I0717 10:39:25.040731    3636 request.go:629] Waited for 190.528349ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0717 10:39:25.040830    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0717 10:39:25.040841    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:25.040852    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:25.040860    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:25.044018    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:25.044088    3636 default_sa.go:45] found service account: "default"
	I0717 10:39:25.044097    3636 default_sa.go:55] duration metric: took 193.941803ms for default service account to be created ...
	I0717 10:39:25.044103    3636 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 10:39:25.240503    3636 request.go:629] Waited for 196.351718ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:39:25.240543    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:39:25.240548    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:25.240554    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:25.240583    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:25.244975    3636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:39:25.249908    3636 system_pods.go:86] 26 kube-system pods found
	I0717 10:39:25.249919    3636 system_pods.go:89] "coredns-7db6d8ff4d-2phrp" [e76a91cf-8a14-454c-baee-7c0128645e0e] Running
	I0717 10:39:25.249923    3636 system_pods.go:89] "coredns-7db6d8ff4d-9dzd5" [f9a7cff1-56a3-4600-ae2f-a951dda10753] Running
	I0717 10:39:25.249940    3636 system_pods.go:89] "etcd-ha-572000" [2d7b717e-d404-4c63-afe9-799de6711964] Running
	I0717 10:39:25.249944    3636 system_pods.go:89] "etcd-ha-572000-m02" [8abc7662-c159-4953-9aba-11a75b4e7d65] Running
	I0717 10:39:25.249948    3636 system_pods.go:89] "etcd-ha-572000-m03" [78629908-d362-4a27-933f-2b929867c22f] Running
	I0717 10:39:25.249951    3636 system_pods.go:89] "kindnet-5xsrp" [0b84aa4d-7452-47f6-8052-f063e2e8ef7b] Running
	I0717 10:39:25.249955    3636 system_pods.go:89] "kindnet-72zfp" [0cc735c4-08b3-499a-b9e2-8c38377f371b] Running
	I0717 10:39:25.249959    3636 system_pods.go:89] "kindnet-g2m92" [be3d84cf-f9e8-426c-b02a-49eb7eed9d6a] Running
	I0717 10:39:25.249962    3636 system_pods.go:89] "kindnet-t85bv" [8649cc70-8ce8-4caf-938e-bd253fa5b7ae] Running
	I0717 10:39:25.249966    3636 system_pods.go:89] "kube-apiserver-ha-572000" [6409591d-7a50-414b-be63-44d5ddb0b0e0] Running
	I0717 10:39:25.249969    3636 system_pods.go:89] "kube-apiserver-ha-572000-m02" [d658459c-67f9-4798-87f2-c651d830af35] Running
	I0717 10:39:25.249973    3636 system_pods.go:89] "kube-apiserver-ha-572000-m03" [ac1c02f1-f023-4ad2-99bc-2657fb9d50d7] Running
	I0717 10:39:25.249976    3636 system_pods.go:89] "kube-controller-manager-ha-572000" [075c4d23-04bd-404a-822d-ae7d326a68ac] Running
	I0717 10:39:25.249979    3636 system_pods.go:89] "kube-controller-manager-ha-572000-m02" [3014bdb3-bb86-48d5-a045-2a8dab026bd8] Running
	I0717 10:39:25.249983    3636 system_pods.go:89] "kube-controller-manager-ha-572000-m03" [3b76e220-3a5b-4dac-8f69-6b380799d2ef] Running
	I0717 10:39:25.249987    3636 system_pods.go:89] "kube-proxy-5wcph" [731f5b57-131e-4e97-b47a-036b8d4edbcd] Running
	I0717 10:39:25.249990    3636 system_pods.go:89] "kube-proxy-h7k9z" [cf687084-b40d-43b6-9476-8c60c5f37d1d] Running
	I0717 10:39:25.249994    3636 system_pods.go:89] "kube-proxy-hst7h" [2b7c8d2c-3d71-4357-9429-1d8438c446f5] Running
	I0717 10:39:25.249997    3636 system_pods.go:89] "kube-proxy-v6jxh" [3f952fc8-747f-49da-b400-5212dba538d8] Running
	I0717 10:39:25.250001    3636 system_pods.go:89] "kube-scheduler-ha-572000" [da36a422-ba49-4cd6-8f47-9be7c43657be] Running
	I0717 10:39:25.250005    3636 system_pods.go:89] "kube-scheduler-ha-572000-m02" [13e289a2-8d3c-478f-9220-1d114dd5bf62] Running
	I0717 10:39:25.250008    3636 system_pods.go:89] "kube-scheduler-ha-572000-m03" [428716db-8096-4b57-9f90-e706d5683852] Running
	I0717 10:39:25.250012    3636 system_pods.go:89] "kube-vip-ha-572000" [57c4f54d-e859-4429-90dd-0402a9e73727] Running
	I0717 10:39:25.250019    3636 system_pods.go:89] "kube-vip-ha-572000-m02" [fa57b8d2-bc41-42af-9d9a-1b68f882e6fe] Running
	I0717 10:39:25.250026    3636 system_pods.go:89] "kube-vip-ha-572000-m03" [a35c5007-dfe3-4b58-b16c-d568b8f9c1c2] Running
	I0717 10:39:25.250031    3636 system_pods.go:89] "storage-provisioner" [1801216d-6529-4049-b874-1577132fc03f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 10:39:25.250037    3636 system_pods.go:126] duration metric: took 205.924043ms to wait for k8s-apps to be running ...
	I0717 10:39:25.250043    3636 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 10:39:25.250097    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 10:39:25.260730    3636 system_svc.go:56] duration metric: took 10.680441ms WaitForService to wait for kubelet
	I0717 10:39:25.260752    3636 kubeadm.go:582] duration metric: took 54.020711767s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:39:25.260767    3636 node_conditions.go:102] verifying NodePressure condition ...
	I0717 10:39:25.440260    3636 request.go:629] Waited for 179.444294ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0717 10:39:25.440305    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0717 10:39:25.440313    3636 round_trippers.go:469] Request Headers:
	I0717 10:39:25.440326    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:39:25.440335    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:39:25.443664    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:39:25.444820    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:39:25.444830    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:39:25.444839    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:39:25.444842    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:39:25.444845    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:39:25.444848    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:39:25.444851    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:39:25.444854    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:39:25.444857    3636 node_conditions.go:105] duration metric: took 184.081224ms to run NodePressure ...
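	Note: the NodePressure check above reads each Node object's reported capacity (ephemeral storage and CPU). A client-go sketch that lists the same values, under the same hypothetical kubeconfig assumption:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}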
	I0717 10:39:25.444866    3636 start.go:241] waiting for startup goroutines ...
	I0717 10:39:25.444881    3636 start.go:255] writing updated cluster config ...
	I0717 10:39:25.466841    3636 out.go:177] 
	I0717 10:39:25.488444    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:39:25.488557    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:39:25.511165    3636 out.go:177] * Starting "ha-572000-m04" worker node in "ha-572000" cluster
	I0717 10:39:25.553049    3636 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:39:25.553078    3636 cache.go:56] Caching tarball of preloaded images
	I0717 10:39:25.553293    3636 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:39:25.553311    3636 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:39:25.553441    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:39:25.554263    3636 start.go:360] acquireMachinesLock for ha-572000-m04: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:39:25.554357    3636 start.go:364] duration metric: took 71.034µs to acquireMachinesLock for "ha-572000-m04"
	I0717 10:39:25.554380    3636 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:39:25.554388    3636 fix.go:54] fixHost starting: m04
	I0717 10:39:25.554780    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:39:25.554805    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:39:25.564043    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52015
	I0717 10:39:25.564385    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:39:25.564752    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:39:25.564769    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:39:25.564963    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:39:25.565075    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:39:25.565158    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetState
	I0717 10:39:25.565257    3636 main.go:141] libmachine: (ha-572000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:39:25.565368    3636 main.go:141] libmachine: (ha-572000-m04) DBG | hyperkit pid from json: 3096
	I0717 10:39:25.566303    3636 main.go:141] libmachine: (ha-572000-m04) DBG | hyperkit pid 3096 missing from process table
	I0717 10:39:25.566325    3636 fix.go:112] recreateIfNeeded on ha-572000-m04: state=Stopped err=<nil>
	I0717 10:39:25.566334    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	W0717 10:39:25.566413    3636 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:39:25.587318    3636 out.go:177] * Restarting existing hyperkit VM for "ha-572000-m04" ...
	I0717 10:39:25.629121    3636 main.go:141] libmachine: (ha-572000-m04) Calling .Start
	I0717 10:39:25.629280    3636 main.go:141] libmachine: (ha-572000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:39:25.629323    3636 main.go:141] libmachine: (ha-572000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/hyperkit.pid
	I0717 10:39:25.629373    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Using UUID d62b35de-5f9d-4091-a1f9-ae55052b3d93
	I0717 10:39:25.659758    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Generated MAC 1e:37:45:6a:f1:7f
	I0717 10:39:25.659780    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
	I0717 10:39:25.659921    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"d62b35de-5f9d-4091-a1f9-ae55052b3d93", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002879b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:39:25.659979    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"d62b35de-5f9d-4091-a1f9-ae55052b3d93", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002879b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:39:25.660027    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "d62b35de-5f9d-4091-a1f9-ae55052b3d93", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/ha-572000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
	I0717 10:39:25.660072    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U d62b35de-5f9d-4091-a1f9-ae55052b3d93 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/ha-572000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
	I0717 10:39:25.660086    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:39:25.661465    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 DEBUG: hyperkit: Pid is 3683
	I0717 10:39:25.661986    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Attempt 0
	I0717 10:39:25.661995    3636 main.go:141] libmachine: (ha-572000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:39:25.662068    3636 main.go:141] libmachine: (ha-572000-m04) DBG | hyperkit pid from json: 3683
	I0717 10:39:25.664876    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Searching for 1e:37:45:6a:f1:7f in /var/db/dhcpd_leases ...
	I0717 10:39:25.665000    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0717 10:39:25.665028    3636 main.go:141] libmachine: (ha-572000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:d3:62:da:43:cf ID:1,6e:d3:62:da:43:cf Lease:0x6699530d}
	I0717 10:39:25.665090    3636 main.go:141] libmachine: (ha-572000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x669952ec}
	I0717 10:39:25.665098    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetConfigRaw
	I0717 10:39:25.665107    3636 main.go:141] libmachine: (ha-572000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x669952da}
	I0717 10:39:25.665121    3636 main.go:141] libmachine: (ha-572000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:37:45:6a:f1:7f ID:1,1e:37:45:6a:f1:7f Lease:0x6698001b}
	I0717 10:39:25.665133    3636 main.go:141] libmachine: (ha-572000-m04) DBG | Found match: 1e:37:45:6a:f1:7f
	I0717 10:39:25.665155    3636 main.go:141] libmachine: (ha-572000-m04) DBG | IP: 192.169.0.8
	I0717 10:39:25.665871    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetIP
	I0717 10:39:25.666075    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
	I0717 10:39:25.666480    3636 machine.go:94] provisionDockerMachine start ...
	I0717 10:39:25.666492    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:39:25.666622    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:39:25.666758    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:39:25.666855    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:39:25.666997    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:39:25.667100    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:39:25.667218    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:39:25.667397    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:39:25.667404    3636 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:39:25.669640    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:39:25.678044    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:39:25.679048    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:39:25.679102    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:39:25.679117    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:39:25.679129    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:39:26.061153    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:39:26.061169    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:39:26.176025    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:39:26.176085    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:39:26.176109    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:39:26.176141    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:39:26.176817    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:39:26.176827    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:39:31.459017    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:31 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:39:31.459116    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:31 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:39:31.459128    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:31 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:39:31.482911    3636 main.go:141] libmachine: (ha-572000-m04) DBG | 2024/07/17 10:39:31 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:40:00.729304    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:40:00.729320    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetMachineName
	I0717 10:40:00.729447    3636 buildroot.go:166] provisioning hostname "ha-572000-m04"
	I0717 10:40:00.729459    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetMachineName
	I0717 10:40:00.729548    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:00.729650    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:00.729752    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:00.729829    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:00.729922    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:00.730060    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:00.730229    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:00.730238    3636 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-572000-m04 && echo "ha-572000-m04" | sudo tee /etc/hostname
	I0717 10:40:00.792250    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000-m04
	
	I0717 10:40:00.792267    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:00.792395    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:00.792496    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:00.792601    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:00.792686    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:00.792813    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:00.792953    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:00.792965    3636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-572000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-572000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:40:00.851570    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:40:00.851592    3636 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:40:00.851608    3636 buildroot.go:174] setting up certificates
	I0717 10:40:00.851614    3636 provision.go:84] configureAuth start
	I0717 10:40:00.851621    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetMachineName
	I0717 10:40:00.851754    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetIP
	I0717 10:40:00.851843    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:00.851935    3636 provision.go:143] copyHostCerts
	I0717 10:40:00.851965    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:40:00.852026    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:40:00.852032    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:40:00.852183    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:40:00.852421    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:40:00.852465    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:40:00.852470    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:40:00.852549    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:40:00.852695    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:40:00.852734    3636 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:40:00.852739    3636 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:40:00.852814    3636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:40:00.852963    3636 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000-m04 san=[127.0.0.1 192.169.0.8 ha-572000-m04 localhost minikube]
	I0717 10:40:01.012731    3636 provision.go:177] copyRemoteCerts
	I0717 10:40:01.012781    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:40:01.012796    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:01.012945    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:01.013036    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.013118    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:01.013205    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/id_rsa Username:docker}
	I0717 10:40:01.045440    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:40:01.045513    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 10:40:01.065877    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:40:01.065952    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:40:01.086341    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:40:01.086417    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:40:01.107237    3636 provision.go:87] duration metric: took 255.607467ms to configureAuth
	I0717 10:40:01.107252    3636 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:40:01.107441    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:40:01.107470    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:01.107602    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:01.107691    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:01.107775    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.107862    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.107936    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:01.108052    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:01.108176    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:01.108184    3636 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:40:01.159812    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:40:01.159826    3636 buildroot.go:70] root file system type: tmpfs
	I0717 10:40:01.159906    3636 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:40:01.159918    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:01.160045    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:01.160133    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.160218    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.160312    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:01.160436    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:01.160588    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:01.160638    3636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:40:01.222986    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:40:01.223013    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:01.223158    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:01.223263    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.223339    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:01.223425    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:01.223557    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:01.223705    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:01.223717    3636 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:40:02.793231    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:40:02.793247    3636 machine.go:97] duration metric: took 37.125816173s to provisionDockerMachine
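	The unit install just above relies on an idempotent pattern: write the candidate unit to docker.service.new, diff it against the live unit, and only when they differ (or the live unit is missing, as the "can't stat" message shows) move it into place, reload systemd, enable and restart the service. A rough sketch of composing that one-liner in Go; updateUnitCmd is an assumed helper name used only for illustration.

package main

import "fmt"

// updateUnitCmd builds the "install only if changed" one-liner seen above:
// diff the candidate unit against the live one, and only when they differ
// (or the live unit is missing) move it into place and restart the service.
// Hypothetical helper, not taken from the minikube source.
func updateUnitCmd(unitName string) string {
	unitPath := "/lib/systemd/system/" + unitName
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
		unitPath, unitName)
}

func main() {
	fmt.Println(updateUnitCmd("docker.service"))
}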
	I0717 10:40:02.793256    3636 start.go:293] postStartSetup for "ha-572000-m04" (driver="hyperkit")
	I0717 10:40:02.793263    3636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:40:02.793273    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:02.793461    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:40:02.793475    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:02.793570    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:02.793662    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:02.793746    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:02.793821    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/id_rsa Username:docker}
	I0717 10:40:02.826174    3636 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:40:02.829517    3636 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:40:02.829527    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:40:02.829627    3636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:40:02.829814    3636 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:40:02.829820    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:40:02.830025    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:40:02.837723    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:40:02.858109    3636 start.go:296] duration metric: took 64.843134ms for postStartSetup
	I0717 10:40:02.858164    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:02.858343    3636 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 10:40:02.858357    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:02.858452    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:02.858535    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:02.858625    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:02.858709    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/id_rsa Username:docker}
	I0717 10:40:02.891466    3636 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0717 10:40:02.891526    3636 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0717 10:40:02.924508    3636 fix.go:56] duration metric: took 37.369170253s for fixHost
	I0717 10:40:02.924533    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:02.924664    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:02.924753    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:02.924844    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:02.924927    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:02.925043    3636 main.go:141] libmachine: Using SSH client type: native
	I0717 10:40:02.925181    3636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8c04060] 0x8c06dc0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0717 10:40:02.925189    3636 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 10:40:02.979156    3636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721238002.907586801
	
	I0717 10:40:02.979168    3636 fix.go:216] guest clock: 1721238002.907586801
	I0717 10:40:02.979174    3636 fix.go:229] Guest: 2024-07-17 10:40:02.907586801 -0700 PDT Remote: 2024-07-17 10:40:02.924523 -0700 PDT m=+161.794729692 (delta=-16.936199ms)
	I0717 10:40:02.979185    3636 fix.go:200] guest clock delta is within tolerance: -16.936199ms
	I0717 10:40:02.979189    3636 start.go:83] releasing machines lock for "ha-572000-m04", held for 37.423872596s
	I0717 10:40:02.979207    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:02.979341    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetIP
	I0717 10:40:03.002677    3636 out.go:177] * Found network options:
	I0717 10:40:03.023433    3636 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0717 10:40:03.044600    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:40:03.044630    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:40:03.044645    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:40:03.044662    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:03.045380    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:03.045584    3636 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:40:03.045691    3636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:40:03.045739    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	W0717 10:40:03.045803    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:40:03.045829    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:40:03.045847    3636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:40:03.045916    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:03.045932    3636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 10:40:03.045950    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:40:03.046116    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:03.046197    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:40:03.046277    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:03.046336    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:40:03.046416    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/id_rsa Username:docker}
	I0717 10:40:03.046472    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:40:03.046583    3636 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/id_rsa Username:docker}
	W0717 10:40:03.078338    3636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:40:03.078404    3636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:40:03.127460    3636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:40:03.127478    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:40:03.127562    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:40:03.143174    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:40:03.152039    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:40:03.160575    3636 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:40:03.160636    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:40:03.169267    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:40:03.178061    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:40:03.186799    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:40:03.195713    3636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:40:03.205361    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:40:03.214887    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:40:03.223632    3636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:40:03.232306    3636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:40:03.240303    3636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:40:03.248146    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:03.349118    3636 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:40:03.368632    3636 start.go:495] detecting cgroup driver to use...
	I0717 10:40:03.368697    3636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:40:03.382935    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:40:03.394904    3636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:40:03.408677    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:40:03.424538    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:40:03.436679    3636 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:40:03.457267    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:40:03.468621    3636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:40:03.484458    3636 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:40:03.487477    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:40:03.495866    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:40:03.509467    3636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:40:03.610005    3636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:40:03.711300    3636 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:40:03.711330    3636 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:40:03.725314    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:03.818685    3636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:40:06.069148    3636 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.250387117s)
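	The 130-byte /etc/docker/daemon.json written just before this restart is what switches Docker to the "cgroupfs" cgroup driver mentioned at docker.go:574. The exact payload is not shown in the log; the struct below is only an assumed approximation of its shape, generated here to illustrate the idea.

package main

import (
	"encoding/json"
	"fmt"
)

// dockerDaemonConfig approximates the kind of /etc/docker/daemon.json payload
// that forces the cgroupfs cgroup driver. The field set is an assumption,
// not copied from minikube's source.
type dockerDaemonConfig struct {
	ExecOpts  []string `json:"exec-opts"`
	LogDriver string   `json:"log-driver"`
}

func main() {
	cfg := dockerDaemonConfig{
		ExecOpts:  []string{"native.cgroupdriver=cgroupfs"},
		LogDriver: "json-file",
	}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}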
	I0717 10:40:06.069225    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:40:06.080064    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:40:06.090634    3636 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:40:06.182522    3636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:40:06.285041    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:06.397211    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:40:06.410586    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:40:06.421941    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:06.525211    3636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 10:40:06.593566    3636 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:40:06.593658    3636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:40:06.598237    3636 start.go:563] Will wait 60s for crictl version
	I0717 10:40:06.598298    3636 ssh_runner.go:195] Run: which crictl
	I0717 10:40:06.601369    3636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:40:06.630287    3636 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 10:40:06.630357    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:40:06.648217    3636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:40:06.713331    3636 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:40:06.734501    3636 out.go:177]   - env NO_PROXY=192.169.0.5
	I0717 10:40:06.755443    3636 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0717 10:40:06.776545    3636 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	I0717 10:40:06.797619    3636 main.go:141] libmachine: (ha-572000-m04) Calling .GetIP
	I0717 10:40:06.797849    3636 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:40:06.801369    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:40:06.811681    3636 mustload.go:65] Loading cluster: ha-572000
	I0717 10:40:06.811867    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:40:06.812096    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:40:06.812120    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:40:06.821106    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52038
	I0717 10:40:06.821460    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:40:06.821823    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:40:06.821839    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:40:06.822045    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:40:06.822158    3636 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:40:06.822237    3636 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:40:06.822325    3636 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3650
	I0717 10:40:06.823304    3636 host.go:66] Checking if "ha-572000" exists ...
	I0717 10:40:06.823558    3636 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:40:06.823583    3636 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:40:06.832052    3636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52040
	I0717 10:40:06.832422    3636 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:40:06.832722    3636 main.go:141] libmachine: Using API Version  1
	I0717 10:40:06.832733    3636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:40:06.832924    3636 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:40:06.833068    3636 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:40:06.833173    3636 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000 for IP: 192.169.0.8
	I0717 10:40:06.833178    3636 certs.go:194] generating shared ca certs ...
	I0717 10:40:06.833187    3636 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:40:06.833369    3636 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:40:06.833445    3636 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:40:06.833455    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:40:06.833477    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:40:06.833496    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:40:06.833513    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:40:06.833602    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:40:06.833654    3636 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:40:06.833664    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:40:06.833699    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:40:06.833731    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:40:06.833765    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:40:06.833830    3636 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:40:06.833866    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:40:06.833895    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:40:06.833914    3636 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:40:06.833943    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:40:06.854528    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:40:06.874473    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:40:06.894419    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:40:06.914655    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:40:06.934481    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:40:06.953938    3636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:40:06.973423    3636 ssh_runner.go:195] Run: openssl version
	I0717 10:40:06.977846    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:40:06.987226    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:40:06.990594    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:40:06.990633    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:40:06.994910    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:40:07.004316    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:40:07.013700    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:40:07.017207    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:40:07.017252    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:40:07.021661    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
	I0717 10:40:07.030891    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:40:07.040013    3636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:40:07.043424    3636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:40:07.043460    3636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:40:07.048023    3636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
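	Each CA certificate installed above is also published under its OpenSSL subject hash (the b5213941.0, 51391683.0 and 3ec20f2e.0 names) so TLS libraries can look it up in /etc/ssl/certs. A small sketch of deriving that link name by shelling out to openssl, as the logged commands do; subjectHashLink is a hypothetical helper.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHashLink returns the symlink command that publishes a certificate
// under its OpenSSL subject hash (the <hash>.0 names seen above).
// Hypothetical helper; it simply shells out to openssl x509 -hash.
func subjectHashLink(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	return fmt.Sprintf("ln -fs %s /etc/ssl/certs/%s.0", certPath, hash), nil
}

func main() {
	cmd, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd)
}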
	I0717 10:40:07.057292    3636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:40:07.060465    3636 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 10:40:07.060498    3636 kubeadm.go:934] updating node {m04 192.169.0.8 0 v1.30.2 docker false true} ...
	I0717 10:40:07.060568    3636 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-572000-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 10:40:07.060612    3636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:40:07.068828    3636 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:40:07.068888    3636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0717 10:40:07.077989    3636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0717 10:40:07.091753    3636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:40:07.105613    3636 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0717 10:40:07.108527    3636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:40:07.118827    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:07.218618    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:40:07.232580    3636 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0717 10:40:07.232780    3636 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:40:07.270354    3636 out.go:177] * Verifying Kubernetes components...
	I0717 10:40:07.343786    3636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:40:07.486955    3636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:40:07.502599    3636 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:40:07.502930    3636 kapi.go:59] client config for ha-572000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa0a8b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 10:40:07.502990    3636 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0717 10:40:07.503236    3636 node_ready.go:35] waiting up to 6m0s for node "ha-572000-m04" to be "Ready" ...
	I0717 10:40:07.503290    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:40:07.503296    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.503303    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.503305    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.507147    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:07.507598    3636 node_ready.go:49] node "ha-572000-m04" has status "Ready":"True"
	I0717 10:40:07.507619    3636 node_ready.go:38] duration metric: took 4.370479ms for node "ha-572000-m04" to be "Ready" ...
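	The node_ready wait above simply polls GET /api/v1/nodes/<name> until the node's Ready condition reports True, with a 6m0s ceiling. A self-contained client-go sketch of the same idea; the kubeconfig path comes from the log, while the helper name and the 2s poll interval are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the named node reports the
// Ready condition as True, or the timeout expires. Hypothetical helper.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19283-1099/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-572000-m04", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node ha-572000-m04 is Ready")
}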
	I0717 10:40:07.507631    3636 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:40:07.507695    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0717 10:40:07.507705    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.507714    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.507718    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.517761    3636 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0717 10:40:07.525740    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.525796    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2phrp
	I0717 10:40:07.525804    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.525810    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.525815    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.527956    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:07.528370    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:07.528378    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.528384    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.528387    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.530521    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:07.530888    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:07.530899    3636 pod_ready.go:81] duration metric: took 5.142557ms for pod "coredns-7db6d8ff4d-2phrp" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.530907    3636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.530969    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9dzd5
	I0717 10:40:07.530978    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.530985    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.530990    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.533172    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:07.533578    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:07.533586    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.533592    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.533595    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.535152    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.535453    3636 pod_ready.go:92] pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:07.535462    3636 pod_ready.go:81] duration metric: took 4.549454ms for pod "coredns-7db6d8ff4d-9dzd5" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.535469    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.535504    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000
	I0717 10:40:07.535509    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.535515    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.535519    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.537042    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.537410    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:07.537417    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.537423    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.537426    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.538975    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.539323    3636 pod_ready.go:92] pod "etcd-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:07.539331    3636 pod_ready.go:81] duration metric: took 3.856623ms for pod "etcd-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.539337    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.539378    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m02
	I0717 10:40:07.539383    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.539389    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.539393    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.541081    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.541459    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:07.541467    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.541473    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.541476    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.542992    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.543383    3636 pod_ready.go:92] pod "etcd-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:07.543391    3636 pod_ready.go:81] duration metric: took 4.050033ms for pod "etcd-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.543397    3636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:07.703505    3636 request.go:629] Waited for 160.066521ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m03
	I0717 10:40:07.703540    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-572000-m03
	I0717 10:40:07.703545    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.703551    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.703556    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.705548    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:07.903510    3636 request.go:629] Waited for 197.511686ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:07.903551    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:07.903556    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:07.903562    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:07.903601    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:07.905857    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:07.906157    3636 pod_ready.go:92] pod "etcd-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:07.906168    3636 pod_ready.go:81] duration metric: took 362.756768ms for pod "etcd-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
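	The "Waited for ... due to client-side throttling, not priority and fairness" lines below come from client-go's default client-side rate limiter (roughly 5 requests per second with a burst of 10 when QPS and Burst are left at zero, as in the rest.Config dump above), not from the API server. If the readiness polling needed to go faster, the limits could be raised on the rest.Config before building the clientset. A minimal sketch; the kubeconfig path is taken from the log, and the chosen QPS/Burst values are assumptions.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19283-1099/kubeconfig")
	if err != nil {
		panic(err)
	}
	// client-go falls back to QPS=5, Burst=10 when these are zero; beyond
	// that, requests are delayed on the client, which is what the
	// "client-side throttling" log lines record. Values below are arbitrary.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("clientset built with relaxed rate limits: %T\n", cs)
}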
	I0717 10:40:07.906180    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:08.103966    3636 request.go:629] Waited for 197.743139ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:40:08.104021    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000
	I0717 10:40:08.104030    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:08.104037    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:08.104046    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:08.106066    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:08.303534    3636 request.go:629] Waited for 196.774341ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:08.303599    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:08.303671    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:08.303686    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:08.303697    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:08.306313    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:08.306837    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:08.306847    3636 pod_ready.go:81] duration metric: took 400.65093ms for pod "kube-apiserver-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:08.306854    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:08.503920    3636 request.go:629] Waited for 197.018157ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:40:08.503964    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m02
	I0717 10:40:08.503984    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:08.503990    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:08.503995    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:08.506056    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:08.703436    3636 request.go:629] Waited for 196.948288ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:08.703494    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:08.703500    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:08.703506    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:08.703511    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:08.705852    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:08.706163    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:08.706173    3636 pod_ready.go:81] duration metric: took 399.30321ms for pod "kube-apiserver-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:08.706179    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:08.903771    3636 request.go:629] Waited for 197.50006ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:40:08.903806    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-572000-m03
	I0717 10:40:08.903813    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:08.903820    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:08.903824    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:08.906399    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:09.104084    3636 request.go:629] Waited for 197.163497ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:09.104171    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:09.104176    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:09.104182    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:09.104187    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:09.106361    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:09.106707    3636 pod_ready.go:92] pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:09.106718    3636 pod_ready.go:81] duration metric: took 400.52413ms for pod "kube-apiserver-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:09.106726    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:09.304052    3636 request.go:629] Waited for 197.283261ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:40:09.304088    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000
	I0717 10:40:09.304093    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:09.304130    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:09.304135    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:09.306083    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:09.504106    3636 request.go:629] Waited for 197.645757ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:09.504208    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:09.504220    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:09.504232    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:09.504240    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:09.511286    3636 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 10:40:09.511696    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:09.511709    3636 pod_ready.go:81] duration metric: took 404.967221ms for pod "kube-controller-manager-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:09.511716    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:09.703585    3636 request.go:629] Waited for 191.795231ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:40:09.703642    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m02
	I0717 10:40:09.703653    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:09.703665    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:09.703671    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:09.706720    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:09.904070    3636 request.go:629] Waited for 196.771647ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:09.904118    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:09.904125    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:09.904134    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:09.904140    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:09.906439    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:09.906766    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:09.906776    3636 pod_ready.go:81] duration metric: took 395.046014ms for pod "kube-controller-manager-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:09.906787    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:10.104935    3636 request.go:629] Waited for 198.017235ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:40:10.105019    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-572000-m03
	I0717 10:40:10.105031    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:10.105061    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:10.105068    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:10.108223    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:10.304013    3636 request.go:629] Waited for 195.251924ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:10.304073    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:10.304086    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:10.304097    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:10.304106    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:10.307327    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:10.307882    3636 pod_ready.go:92] pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:10.307891    3636 pod_ready.go:81] duration metric: took 401.08706ms for pod "kube-controller-manager-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:10.307899    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:10.504739    3636 request.go:629] Waited for 196.801571ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:40:10.504780    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wcph
	I0717 10:40:10.504821    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:10.504827    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:10.504831    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:10.506960    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:10.703733    3636 request.go:629] Waited for 196.095597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:40:10.703831    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m04
	I0717 10:40:10.703840    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:10.703866    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:10.703875    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:10.706696    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:10.707101    3636 pod_ready.go:92] pod "kube-proxy-5wcph" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:10.707111    3636 pod_ready.go:81] duration metric: took 399.196595ms for pod "kube-proxy-5wcph" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:10.707118    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:10.903773    3636 request.go:629] Waited for 196.61026ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:40:10.903910    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h7k9z
	I0717 10:40:10.903927    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:10.903945    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:10.903955    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:10.906117    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:11.104247    3636 request.go:629] Waited for 197.64653ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:11.104330    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:11.104339    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:11.104351    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:11.104362    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:11.107473    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:11.107930    3636 pod_ready.go:92] pod "kube-proxy-h7k9z" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:11.107945    3636 pod_ready.go:81] duration metric: took 400.810357ms for pod "kube-proxy-h7k9z" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:11.107954    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:11.304083    3636 request.go:629] Waited for 196.074281ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:40:11.304132    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hst7h
	I0717 10:40:11.304139    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:11.304147    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:11.304151    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:11.306391    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:11.503460    3636 request.go:629] Waited for 196.558235ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:11.503507    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:11.503513    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:11.503519    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:11.503523    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:11.505457    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:11.505774    3636 pod_ready.go:92] pod "kube-proxy-hst7h" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:11.505785    3636 pod_ready.go:81] duration metric: took 397.815014ms for pod "kube-proxy-hst7h" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:11.505792    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:11.704821    3636 request.go:629] Waited for 198.981688ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:40:11.704918    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v6jxh
	I0717 10:40:11.704927    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:11.704933    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:11.704936    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:11.707262    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:11.903612    3636 request.go:629] Waited for 195.874248ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:11.903682    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:11.903689    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:11.903696    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:11.903700    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:11.905982    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:11.906348    3636 pod_ready.go:92] pod "kube-proxy-v6jxh" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:11.906359    3636 pod_ready.go:81] duration metric: took 400.551047ms for pod "kube-proxy-v6jxh" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:11.906369    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:12.103492    3636 request.go:629] Waited for 197.075685ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:40:12.103568    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000
	I0717 10:40:12.103574    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:12.103580    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:12.103585    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:12.105506    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:12.303814    3636 request.go:629] Waited for 197.930746ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:12.303844    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000
	I0717 10:40:12.303850    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:12.303867    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:12.303874    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:12.305845    3636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:40:12.306164    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:12.306174    3636 pod_ready.go:81] duration metric: took 399.787712ms for pod "kube-scheduler-ha-572000" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:12.306181    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:12.503949    3636 request.go:629] Waited for 197.718801ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:40:12.504068    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m02
	I0717 10:40:12.504079    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:12.504087    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:12.504093    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:12.506372    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:12.704852    3636 request.go:629] Waited for 198.155745ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:12.704924    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m02
	I0717 10:40:12.704932    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:12.704940    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:12.704944    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:12.707307    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:12.707616    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:12.707626    3636 pod_ready.go:81] duration metric: took 401.429815ms for pod "kube-scheduler-ha-572000-m02" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:12.707633    3636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:12.903728    3636 request.go:629] Waited for 196.035029ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:40:12.903828    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-572000-m03
	I0717 10:40:12.903836    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:12.903842    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:12.903845    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:12.906224    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:13.103515    3636 request.go:629] Waited for 196.951957ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:13.103588    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-572000-m03
	I0717 10:40:13.103593    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:13.103599    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:13.103603    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:13.105622    3636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:40:13.106020    3636 pod_ready.go:92] pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 10:40:13.106029    3636 pod_ready.go:81] duration metric: took 398.380033ms for pod "kube-scheduler-ha-572000-m03" in "kube-system" namespace to be "Ready" ...
	I0717 10:40:13.106046    3636 pod_ready.go:38] duration metric: took 5.59825813s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:40:13.106061    3636 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 10:40:13.106113    3636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 10:40:13.116872    3636 system_svc.go:56] duration metric: took 10.807598ms WaitForService to wait for kubelet
	I0717 10:40:13.116887    3636 kubeadm.go:582] duration metric: took 5.884130758s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:40:13.116904    3636 node_conditions.go:102] verifying NodePressure condition ...
	I0717 10:40:13.303772    3636 request.go:629] Waited for 186.81691ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0717 10:40:13.303803    3636 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0717 10:40:13.303807    3636 round_trippers.go:469] Request Headers:
	I0717 10:40:13.303841    3636 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:40:13.303846    3636 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:40:13.306895    3636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:40:13.307714    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:40:13.307729    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:40:13.307740    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:40:13.307744    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:40:13.307748    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:40:13.307751    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:40:13.307757    3636 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:40:13.307761    3636 node_conditions.go:123] node cpu capacity is 2
	I0717 10:40:13.307764    3636 node_conditions.go:105] duration metric: took 190.851869ms to run NodePressure ...
	I0717 10:40:13.307772    3636 start.go:241] waiting for startup goroutines ...
	I0717 10:40:13.307786    3636 start.go:255] writing updated cluster config ...
	I0717 10:40:13.308139    3636 ssh_runner.go:195] Run: rm -f paused
	I0717 10:40:13.349733    3636 start.go:600] kubectl: 1.29.2, cluster: 1.30.2 (minor skew: 1)
	I0717 10:40:13.371543    3636 out.go:177] * Done! kubectl is now configured to use "ha-572000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 17 17:38:43 ha-572000 dockerd[1183]: time="2024-07-17T17:38:43.319450280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.340195606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.340255461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.340333620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.340397061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.341315078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.341404694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.341501856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.343515271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.343612113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.343637500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.343972230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:44 ha-572000 dockerd[1183]: time="2024-07-17T17:38:44.346166794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:45 ha-572000 dockerd[1183]: time="2024-07-17T17:38:45.310104278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:38:45 ha-572000 dockerd[1183]: time="2024-07-17T17:38:45.310177463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:38:45 ha-572000 dockerd[1183]: time="2024-07-17T17:38:45.310195349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:38:45 ha-572000 dockerd[1183]: time="2024-07-17T17:38:45.310377303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:39:13 ha-572000 dockerd[1176]: time="2024-07-17T17:39:13.526781737Z" level=info msg="ignoring event" container=a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 17:39:13 ha-572000 dockerd[1183]: time="2024-07-17T17:39:13.527422614Z" level=info msg="shim disconnected" id=a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403 namespace=moby
	Jul 17 17:39:13 ha-572000 dockerd[1183]: time="2024-07-17T17:39:13.527577585Z" level=warning msg="cleaning up after shim disconnected" id=a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403 namespace=moby
	Jul 17 17:39:13 ha-572000 dockerd[1183]: time="2024-07-17T17:39:13.527671021Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 17 17:40:35 ha-572000 dockerd[1183]: time="2024-07-17T17:40:35.340652733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:40:35 ha-572000 dockerd[1183]: time="2024-07-17T17:40:35.340734956Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:40:35 ha-572000 dockerd[1183]: time="2024-07-17T17:40:35.340749170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:40:35 ha-572000 dockerd[1183]: time="2024-07-17T17:40:35.341115504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f904e7fbc3286       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   be6e24303245d       storage-provisioner
	0544a7b38aa20       cbb01a7bd410d                                                                                         3 minutes ago        Running             coredns                   1                   211b5a6515354       coredns-7db6d8ff4d-9dzd5
	2f15e40a181ae       53c535741fb44                                                                                         3 minutes ago        Running             kube-proxy                1                   4aab8735c2c04       kube-proxy-hst7h
	a5d6b6937bc80       8c811b4aec35f                                                                                         3 minutes ago        Running             busybox                   1                   24dc28c9171d4       busybox-fc5497c4f-5r4wl
	90d12ecf2a207       5cc3abe5717db                                                                                         3 minutes ago        Running             kindnet-cni               1                   c4ad8ae388e4c       kindnet-t85bv
	a82cf6255e5a9       6e38f40d628db                                                                                         3 minutes ago        Exited              storage-provisioner       1                   be6e24303245d       storage-provisioner
	22dbe2e88f6f6       cbb01a7bd410d                                                                                         3 minutes ago        Running             coredns                   1                   ebfbe4a086eb8       coredns-7db6d8ff4d-2phrp
	d0c5e4f0005b0       e874818b3caac                                                                                         3 minutes ago        Running             kube-controller-manager   6                   3143df977771c       kube-controller-manager-ha-572000
	2988c5a570cb1       38af8ddebf499                                                                                         4 minutes ago        Running             kube-vip                  1                   bb35c323d1311       kube-vip-ha-572000
	b589feb3cd968       7820c83aa1394                                                                                         4 minutes ago        Running             kube-scheduler            2                   1f36c956df9c2       kube-scheduler-ha-572000
	c4604d37a9454       3861cfcd7c04c                                                                                         4 minutes ago        Running             etcd                      3                   73d23719d576c       etcd-ha-572000
	490b99a8cd7e0       56ce0fd9fb532                                                                                         4 minutes ago        Running             kube-apiserver            6                   43743c72743dc       kube-apiserver-ha-572000
	caed8fc7c24d9       e874818b3caac                                                                                         4 minutes ago        Exited              kube-controller-manager   5                   3143df977771c       kube-controller-manager-ha-572000
	cd333393aa057       56ce0fd9fb532                                                                                         4 minutes ago        Exited              kube-apiserver            5                   6d7eb0e874999       kube-apiserver-ha-572000
	b6b4ce34842d6       3861cfcd7c04c                                                                                         4 minutes ago        Exited              etcd                      2                   986ceb5a6f870       etcd-ha-572000
	138bf6784d59c       38af8ddebf499                                                                                         8 minutes ago        Exited              kube-vip                  0                   df04438a4c5cc       kube-vip-ha-572000
	a53f8fcdf5d97       7820c83aa1394                                                                                         8 minutes ago        Exited              kube-scheduler            1                   bfd880612991e       kube-scheduler-ha-572000
	e1a5eb1bed550       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago       Exited              busybox                   0                   29ab413131af2       busybox-fc5497c4f-5r4wl
	bb44d784bb7ab       cbb01a7bd410d                                                                                         13 minutes ago       Exited              coredns                   0                   b8c622f08395f       coredns-7db6d8ff4d-2phrp
	7b275812468c9       cbb01a7bd410d                                                                                         13 minutes ago       Exited              coredns                   0                   2588bd7c40c23       coredns-7db6d8ff4d-9dzd5
	6e40e1427ab20       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              14 minutes ago       Exited              kindnet-cni               0                   32dc836c0a2df       kindnet-t85bv
	2aeed19835352       53c535741fb44                                                                                         14 minutes ago       Exited              kube-proxy                0                   f688e08d591be       kube-proxy-hst7h
	
	
	==> coredns [0544a7b38aa2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47730 - 44649 "HINFO IN 7657991150461714427.6847867729784937660. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009507113s
	
	
	==> coredns [22dbe2e88f6f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50584 - 51756 "HINFO IN 3888167032918365436.646455749640363721. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.007934252s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1469986290]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 17:38:43.514) (total time: 30002ms):
	Trace[1469986290]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (17:39:13.516)
	Trace[1469986290]: [30.002760682s] [30.002760682s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1457962466]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 17:38:43.515) (total time: 30001ms):
	Trace[1457962466]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:39:13.516)
	Trace[1457962466]: [30.001713432s] [30.001713432s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[94258701]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 17:38:43.514) (total time: 30003ms):
	Trace[94258701]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (17:39:13.516)
	Trace[94258701]: [30.003582814s] [30.003582814s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [7b275812468c] <==
	[INFO] 10.244.0.4:49035 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001628883s
	[INFO] 10.244.0.4:59665 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081818s
	[INFO] 10.244.0.4:52274 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077378s
	[INFO] 10.244.1.2:47113 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103907s
	[INFO] 10.244.2.2:57796 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060561s
	[INFO] 10.244.2.2:40339 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000425199s
	[INFO] 10.244.2.2:38339 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010468s
	[INFO] 10.244.2.2:50750 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070547s
	[INFO] 10.244.0.4:57426 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000045039s
	[INFO] 10.244.0.4:44283 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031195s
	[INFO] 10.244.1.2:56783 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117359s
	[INFO] 10.244.1.2:34315 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061358s
	[INFO] 10.244.1.2:55792 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062223s
	[INFO] 10.244.2.2:35768 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086426s
	[INFO] 10.244.2.2:42473 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102022s
	[INFO] 10.244.0.4:53524 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000087177s
	[INFO] 10.244.0.4:35224 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079092s
	[INFO] 10.244.0.4:53020 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075585s
	[INFO] 10.244.1.2:58074 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138293s
	[INFO] 10.244.1.2:40567 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000088337s
	[INFO] 10.244.1.2:46301 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000065328s
	[INFO] 10.244.1.2:59898 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075157s
	[INFO] 10.244.2.2:48754 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000081888s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bb44d784bb7a] <==
	[INFO] 10.244.0.4:54052 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001019206s
	[INFO] 10.244.0.4:45412 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077905s
	[INFO] 10.244.0.4:37924 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159382s
	[INFO] 10.244.1.2:45862 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108196s
	[INFO] 10.244.1.2:47464 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000616154s
	[INFO] 10.244.1.2:53720 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059338s
	[INFO] 10.244.1.2:49774 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123053s
	[INFO] 10.244.1.2:54472 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000097287s
	[INFO] 10.244.1.2:52054 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064095s
	[INFO] 10.244.1.2:54428 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064142s
	[INFO] 10.244.2.2:53088 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109393s
	[INFO] 10.244.2.2:49993 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000114279s
	[INFO] 10.244.2.2:42708 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063661s
	[INFO] 10.244.2.2:57150 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000043434s
	[INFO] 10.244.0.4:45987 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112397s
	[INFO] 10.244.0.4:44116 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057797s
	[INFO] 10.244.1.2:37635 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105428s
	[INFO] 10.244.2.2:52081 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131248s
	[INFO] 10.244.2.2:54912 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087858s
	[INFO] 10.244.0.4:53981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079367s
	[INFO] 10.244.2.2:45850 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152036s
	[INFO] 10.244.2.2:36004 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000223s
	[INFO] 10.244.2.2:54459 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000128834s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-572000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-572000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-572000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T10_27_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:27:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-572000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:41:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:38:16 +0000   Wed, 17 Jul 2024 17:27:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:38:16 +0000   Wed, 17 Jul 2024 17:27:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:38:16 +0000   Wed, 17 Jul 2024 17:27:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:38:16 +0000   Wed, 17 Jul 2024 17:27:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-572000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 cc4828ff3a4b410d87d0a2c48b8c546d
	  System UUID:                5f264258-0000-0000-9840-7856c1bd4173
	  Boot ID:                    2568bff2-eded-45b6-850c-4c0e9d36f966
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5r4wl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-2phrp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-9dzd5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-572000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-t85bv                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-572000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-572000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-hst7h                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-572000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-572000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m4s                   kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node ha-572000 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node ha-572000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node ha-572000 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           14m                    node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  NodeReady                13m                    kubelet          Node ha-572000 status is now: NodeReady
	  Normal  RegisteredNode           12m                    node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  RegisteredNode           9m42s                  node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  Starting                 4m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s (x8 over 4m10s)  kubelet          Node ha-572000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x8 over 4m10s)  kubelet          Node ha-572000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x7 over 4m10s)  kubelet          Node ha-572000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m29s                  node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  RegisteredNode           3m8s                   node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  RegisteredNode           3m2s                   node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	  Normal  RegisteredNode           18s                    node-controller  Node ha-572000 event: Registered Node ha-572000 in Controller
	
	
	Name:               ha-572000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-572000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-572000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T10_28_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:28:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-572000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:41:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:38:08 +0000   Wed, 17 Jul 2024 17:28:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:38:08 +0000   Wed, 17 Jul 2024 17:28:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:38:08 +0000   Wed, 17 Jul 2024 17:28:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:38:08 +0000   Wed, 17 Jul 2024 17:28:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-572000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 21a94638d6914aaeb48a6d7a895c9b99
	  System UUID:                b5da4916-0000-0000-aec8-9a96c30c8c05
	  Boot ID:                    d3f575b3-f9f0-45ee-bee7-6209fb3d26a4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9sdw5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-572000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-g2m92                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-572000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-572000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-v6jxh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-572000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-572000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m34s                  kube-proxy       
	  Normal   Starting                 9m55s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-572000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-572000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-572000-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                    node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Warning  Rebooted                 9m58s                  kubelet          Node ha-572000-m02 has been rebooted, boot id: 7661c0d0-1379-4b0e-b101-3961fae1a207
	  Normal   NodeHasSufficientPID     9m58s                  kubelet          Node ha-572000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    9m58s                  kubelet          Node ha-572000-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 9m58s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9m58s                  kubelet          Node ha-572000-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           9m42s                  node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   Starting                 3m52s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  3m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m51s (x8 over 3m52s)  kubelet          Node ha-572000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m51s (x8 over 3m52s)  kubelet          Node ha-572000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m51s (x7 over 3m52s)  kubelet          Node ha-572000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m29s                  node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   RegisteredNode           3m8s                   node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   RegisteredNode           3m2s                   node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	  Normal   RegisteredNode           18s                    node-controller  Node ha-572000-m02 event: Registered Node ha-572000-m02 in Controller
	
	
	Name:               ha-572000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-572000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-572000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T10_29_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:29:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-572000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:41:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:38:31 +0000   Wed, 17 Jul 2024 17:29:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:38:31 +0000   Wed, 17 Jul 2024 17:29:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:38:31 +0000   Wed, 17 Jul 2024 17:29:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:38:31 +0000   Wed, 17 Jul 2024 17:30:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-572000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 be52acddd53148cc8c17d6c21c17abf3
	  System UUID:                50644be4-0000-0000-8d75-15b09204e5f5
	  Boot ID:                    f2b4f1ba-89fc-4cd2-a163-0306f2bae7bb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jhz2d                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-572000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-72zfp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-572000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-572000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-h7k9z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-572000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-572000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 3m14s              kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node ha-572000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node ha-572000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node ha-572000-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   RegisteredNode           9m42s              node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   RegisteredNode           3m29s              node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   NodeHasSufficientMemory  3m18s              kubelet          Node ha-572000-m03 status is now: NodeHasSufficientMemory
	  Normal   Starting                 3m18s              kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  3m18s              kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    3m18s              kubelet          Node ha-572000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m18s              kubelet          Node ha-572000-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 3m18s              kubelet          Node ha-572000-m03 has been rebooted, boot id: f2b4f1ba-89fc-4cd2-a163-0306f2bae7bb
	  Normal   RegisteredNode           3m8s               node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   RegisteredNode           3m2s               node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	  Normal   RegisteredNode           18s                node-controller  Node ha-572000-m03 event: Registered Node ha-572000-m03 in Controller
	
	
	Name:               ha-572000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-572000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-572000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T10_30_40_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:30:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-572000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:41:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:40:07 +0000   Wed, 17 Jul 2024 17:40:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:40:07 +0000   Wed, 17 Jul 2024 17:40:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:40:07 +0000   Wed, 17 Jul 2024 17:40:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:40:07 +0000   Wed, 17 Jul 2024 17:40:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-572000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 a064c491460940e4967dc27f529a5ea6
	  System UUID:                d62b4091-0000-0000-a1f9-ae55052b3d93
	  Boot ID:                    9c875bb7-4ccf-49df-b662-ce64a8634436
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5xsrp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-proxy-5wcph    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 100s                 kube-proxy       
	  Normal   Starting                 11m                  kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)    kubelet          Node ha-572000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)    kubelet          Node ha-572000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)    kubelet          Node ha-572000-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                  node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   RegisteredNode           11m                  node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   RegisteredNode           11m                  node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   NodeReady                10m                  kubelet          Node ha-572000-m04 status is now: NodeReady
	  Normal   RegisteredNode           9m42s                node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   RegisteredNode           3m29s                node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   RegisteredNode           3m8s                 node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   RegisteredNode           3m2s                 node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	  Normal   NodeNotReady             2m49s                node-controller  Node ha-572000-m04 status is now: NodeNotReady
	  Normal   Starting                 102s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  102s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  102s (x2 over 102s)  kubelet          Node ha-572000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    102s (x2 over 102s)  kubelet          Node ha-572000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     102s (x2 over 102s)  kubelet          Node ha-572000-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 102s                 kubelet          Node ha-572000-m04 has been rebooted, boot id: 9c875bb7-4ccf-49df-b662-ce64a8634436
	  Normal   NodeReady                102s                 kubelet          Node ha-572000-m04 status is now: NodeReady
	  Normal   RegisteredNode           18s                  node-controller  Node ha-572000-m04 event: Registered Node ha-572000-m04 in Controller
	
	
	Name:               ha-572000-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-572000-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-572000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T10_41_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:41:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-572000-m05
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:41:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:41:44 +0000   Wed, 17 Jul 2024 17:41:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:41:44 +0000   Wed, 17 Jul 2024 17:41:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:41:44 +0000   Wed, 17 Jul 2024 17:41:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:41:44 +0000   Wed, 17 Jul 2024 17:41:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.9
	  Hostname:    ha-572000-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 767b66bba81748e696eb2e462f5f7060
	  System UUID:                56c3461c-0000-0000-b26f-1b2c0afb03b4
	  Boot ID:                    39c772b6-19a0-4d6d-b5e1-f52a71880d81
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-572000-m05                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         33s
	  kube-system                 kindnet-dpf85                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      35s
	  kube-system                 kube-apiserver-ha-572000-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-ha-572000-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-64xjf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-scheduler-ha-572000-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-vip-ha-572000-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 31s                kube-proxy       
	  Normal  NodeHasSufficientMemory  35s (x8 over 36s)  kubelet          Node ha-572000-m05 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 36s)  kubelet          Node ha-572000-m05 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x7 over 36s)  kubelet          Node ha-572000-m05 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  35s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           34s                node-controller  Node ha-572000-m05 event: Registered Node ha-572000-m05 in Controller
	  Normal  RegisteredNode           33s                node-controller  Node ha-572000-m05 event: Registered Node ha-572000-m05 in Controller
	  Normal  RegisteredNode           32s                node-controller  Node ha-572000-m05 event: Registered Node ha-572000-m05 in Controller
	  Normal  RegisteredNode           18s                node-controller  Node ha-572000-m05 event: Registered Node ha-572000-m05 in Controller
	
	
	==> dmesg <==
	[  +0.035701] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007982] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.369068] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006691] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.635959] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.223787] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.844039] systemd-fstab-generator[468]: Ignoring "noauto" option for root device
	[  +0.100018] systemd-fstab-generator[480]: Ignoring "noauto" option for root device
	[  +1.895052] systemd-fstab-generator[1104]: Ignoring "noauto" option for root device
	[  +0.053692] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.194931] systemd-fstab-generator[1141]: Ignoring "noauto" option for root device
	[  +0.116874] systemd-fstab-generator[1153]: Ignoring "noauto" option for root device
	[  +0.104796] systemd-fstab-generator[1167]: Ignoring "noauto" option for root device
	[  +2.435008] systemd-fstab-generator[1384]: Ignoring "noauto" option for root device
	[  +0.114297] systemd-fstab-generator[1396]: Ignoring "noauto" option for root device
	[  +0.106280] systemd-fstab-generator[1408]: Ignoring "noauto" option for root device
	[  +0.119247] systemd-fstab-generator[1423]: Ignoring "noauto" option for root device
	[  +0.407183] systemd-fstab-generator[1582]: Ignoring "noauto" option for root device
	[  +6.782353] kauditd_printk_skb: 234 callbacks suppressed
	[Jul17 17:38] kauditd_printk_skb: 40 callbacks suppressed
	[ +35.726193] kauditd_printk_skb: 25 callbacks suppressed
	[Jul17 17:39] kauditd_printk_skb: 50 callbacks suppressed
	
	
	==> etcd [b6b4ce34842d] <==
	{"level":"info","ts":"2024-07-17T17:37:06.183089Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-07-17T17:37:07.625159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:07.625381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:07.625508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:07.625789Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:07.626021Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:09.125694Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:09.125798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:09.125845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:09.12599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:09.12619Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:10.625744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:10.62582Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:10.625858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:10.625915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:10.625982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	{"level":"warn","ts":"2024-07-17T17:37:11.167194Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f6a6d94fe6ea5bc8","rtt":"0s"}
	{"level":"warn","ts":"2024-07-17T17:37:11.167486Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f6a6d94fe6ea5bc8","rtt":"0s"}
	{"level":"warn","ts":"2024-07-17T17:37:11.185338Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1d3f36ee75516151","rtt":"0s"}
	{"level":"warn","ts":"2024-07-17T17:37:11.185403Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1d3f36ee75516151","rtt":"0s"}
	{"level":"info","ts":"2024-07-17T17:37:12.128113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:12.128317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:12.128469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:12.128888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
	{"level":"info","ts":"2024-07-17T17:37:12.129376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
	
	
	==> etcd [c4604d37a945] <==
	{"level":"warn","ts":"2024-07-17T17:41:14.324016Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.169.0.9:44238","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-07-17T17:41:14.334681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(2107463548431065425 13314548521573537860 17773131916664003528) learners=(16006101081352431403)"}
	{"level":"info","ts":"2024-07-17T17:41:14.335365Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","added-peer-id":"de2118212901e72b","added-peer-peer-urls":["https://192.169.0.9:2380"]}
	{"level":"info","ts":"2024-07-17T17:41:14.335913Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"de2118212901e72b"}
	{"level":"info","ts":"2024-07-17T17:41:14.336217Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"de2118212901e72b"}
	{"level":"info","ts":"2024-07-17T17:41:14.337301Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"de2118212901e72b"}
	{"level":"info","ts":"2024-07-17T17:41:14.338676Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"de2118212901e72b"}
	{"level":"info","ts":"2024-07-17T17:41:14.34151Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"de2118212901e72b"}
	{"level":"info","ts":"2024-07-17T17:41:14.342462Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"de2118212901e72b","remote-peer-urls":["https://192.169.0.9:2380"]}
	{"level":"info","ts":"2024-07-17T17:41:14.341846Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"de2118212901e72b"}
	{"level":"info","ts":"2024-07-17T17:41:14.34175Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"de2118212901e72b"}
	{"level":"warn","ts":"2024-07-17T17:41:14.37406Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.169.0.9:44278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-07-17T17:41:14.393112Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"de2118212901e72b","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-07-17T17:41:14.886078Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"de2118212901e72b","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-07-17T17:41:15.391819Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"de2118212901e72b","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-07-17T17:41:15.441542Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"de2118212901e72b"}
	{"level":"info","ts":"2024-07-17T17:41:15.453858Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"de2118212901e72b"}
	{"level":"info","ts":"2024-07-17T17:41:15.456472Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"de2118212901e72b"}
	{"level":"info","ts":"2024-07-17T17:41:15.488031Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"de2118212901e72b","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-17T17:41:15.488346Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"de2118212901e72b"}
	{"level":"info","ts":"2024-07-17T17:41:15.488889Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"de2118212901e72b","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-17T17:41:15.488928Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"de2118212901e72b"}
	{"level":"info","ts":"2024-07-17T17:41:16.392768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(2107463548431065425 13314548521573537860 16006101081352431403 17773131916664003528)"}
	{"level":"info","ts":"2024-07-17T17:41:16.393348Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-07-17T17:41:16.393877Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"de2118212901e72b"}
	
	
	==> kernel <==
	 17:41:49 up 4 min,  0 users,  load average: 0.08, 0.08, 0.03
	Linux ha-572000 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6e40e1427ab2] <==
	I0717 17:31:56.892269       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:32:06.898213       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:32:06.898253       1 main.go:303] handling current node
	I0717 17:32:06.898264       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:32:06.898269       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:32:06.898416       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:32:06.898443       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:32:06.898526       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:32:06.898555       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:32:16.896377       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:32:16.896415       1 main.go:303] handling current node
	I0717 17:32:16.896426       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:32:16.896432       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:32:16.896606       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:32:16.896636       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:32:16.896674       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:32:16.896699       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:32:26.896557       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:32:26.896622       1 main.go:303] handling current node
	I0717 17:32:26.896678       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:32:26.896718       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:32:26.896938       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:32:26.897017       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:32:26.897158       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:32:26.897880       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [90d12ecf2a20] <==
	I0717 17:41:25.428248       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:41:25.428422       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:41:25.428533       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:41:25.428768       1 main.go:299] Handling node with IPs: map[192.169.0.9:{}]
	I0717 17:41:25.428834       1 main.go:326] Node ha-572000-m05 has CIDR [10.244.4.0/24] 
	I0717 17:41:35.427111       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:41:35.427150       1 main.go:303] handling current node
	I0717 17:41:35.427162       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:41:35.427167       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:41:35.427382       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:41:35.427413       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:41:35.427460       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:41:35.427464       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:41:35.427495       1 main.go:299] Handling node with IPs: map[192.169.0.9:{}]
	I0717 17:41:35.427499       1 main.go:326] Node ha-572000-m05 has CIDR [10.244.4.0/24] 
	I0717 17:41:45.427214       1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
	I0717 17:41:45.427236       1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:41:45.427314       1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
	I0717 17:41:45.427320       1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24] 
	I0717 17:41:45.427357       1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
	I0717 17:41:45.427362       1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24] 
	I0717 17:41:45.427391       1 main.go:299] Handling node with IPs: map[192.169.0.9:{}]
	I0717 17:41:45.427395       1 main.go:326] Node ha-572000-m05 has CIDR [10.244.4.0/24] 
	I0717 17:41:45.427423       1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
	I0717 17:41:45.427428       1 main.go:303] handling current node
	
	
	==> kube-apiserver [490b99a8cd7e] <==
	I0717 17:38:06.692598       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0717 17:38:06.695172       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 17:38:06.753691       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 17:38:06.754495       1 policy_source.go:224] refreshing policies
	I0717 17:38:06.761461       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 17:38:06.775946       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 17:38:06.777937       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 17:38:06.777967       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 17:38:06.785861       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 17:38:06.785861       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 17:38:06.789965       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0717 17:38:06.785881       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 17:38:06.790098       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 17:38:06.790136       1 aggregator.go:165] initial CRD sync complete...
	I0717 17:38:06.790141       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 17:38:06.790145       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 17:38:06.790148       1 cache.go:39] Caches are synced for autoregister controller
	W0717 17:38:06.822673       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.7]
	I0717 17:38:06.824170       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 17:38:06.847080       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 17:38:06.894480       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0717 17:38:06.899931       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0717 17:38:07.685599       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0717 17:38:07.910228       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5 192.169.0.7]
	W0717 17:38:27.915985       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5 192.169.0.6]
	
	
	==> kube-apiserver [cd333393aa05] <==
	I0717 17:37:11.795742       1 options.go:221] external host was not specified, using 192.169.0.5
	I0717 17:37:11.796641       1 server.go:148] Version: v1.30.2
	I0717 17:37:11.796774       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:37:12.098000       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0717 17:37:12.100463       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 17:37:12.102906       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0717 17:37:12.102927       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0717 17:37:12.103040       1 instance.go:299] Using reconciler: lease
	W0717 17:37:13.058091       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:59336->127.0.0.1:2379: read: connection reset by peer"
	W0717 17:37:13.058287       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:59310->127.0.0.1:2379: read: connection reset by peer"
	W0717 17:37:13.058569       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:59320->127.0.0.1:2379: read: connection reset by peer"
	
	
	==> kube-controller-manager [caed8fc7c24d] <==
	I0717 17:37:47.127601       1 serving.go:380] Generated self-signed cert in-memory
	I0717 17:37:47.646900       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0717 17:37:47.646935       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:37:47.649809       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0717 17:37:47.649838       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 17:37:47.650220       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 17:37:47.649847       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0717 17:38:07.655360       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [d0c5e4f0005b] <==
	I0717 17:38:41.432004       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0717 17:38:41.511531       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 17:38:41.518940       1 shared_informer.go:320] Caches are synced for daemon sets
	I0717 17:38:41.541830       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 17:38:41.550619       1 shared_informer.go:320] Caches are synced for stateful set
	I0717 17:38:41.975157       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 17:38:41.982462       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 17:38:41.982520       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 17:38:43.635302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.818µs"
	I0717 17:38:44.733712       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.810534ms"
	I0717 17:38:44.734043       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.445µs"
	I0717 17:38:45.721419       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.76µs"
	I0717 17:38:45.768611       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-9v69m EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-9v69m\": the object has been modified; please apply your changes to the latest version and try again"
	I0717 17:38:45.771754       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"7c540b68-a08e-44ac-9c69-ea596263c8eb", APIVersion:"v1", ResourceVersion:"260", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-9v69m EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-9v69m": the object has been modified; please apply your changes to the latest version and try again
	I0717 17:38:45.781131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.861246ms"
	I0717 17:38:45.781831       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47.528µs"
	I0717 17:39:19.551280       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.494894ms"
	I0717 17:39:19.551568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="81.124µs"
	I0717 17:40:07.684329       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-572000-m04"
	E0717 17:41:14.082163       1 certificate_controller.go:146] Sync csr-qbxdh failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-qbxdh": the object has been modified; please apply your changes to the latest version and try again
	I0717 17:41:14.172914       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-572000-m04"
	I0717 17:41:14.175471       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-572000-m05\" does not exist"
	I0717 17:41:14.194311       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-572000-m05" podCIDRs=["10.244.4.0/24"]
	I0717 17:41:16.399973       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-572000-m05"
	I0717 17:41:33.445418       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-572000-m04"
	
	
	==> kube-proxy [2aeed1983535] <==
	I0717 17:27:43.315695       1 server_linux.go:69] "Using iptables proxy"
	I0717 17:27:43.322673       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	I0717 17:27:43.354011       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 17:27:43.354032       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 17:27:43.354044       1 server_linux.go:165] "Using iptables Proxier"
	I0717 17:27:43.355997       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 17:27:43.356216       1 server.go:872] "Version info" version="v1.30.2"
	I0717 17:27:43.356225       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:27:43.356874       1 config.go:192] "Starting service config controller"
	I0717 17:27:43.356903       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 17:27:43.356921       1 config.go:319] "Starting node config controller"
	I0717 17:27:43.356943       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 17:27:43.357077       1 config.go:101] "Starting endpoint slice config controller"
	I0717 17:27:43.357144       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 17:27:43.457513       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 17:27:43.457607       1 shared_informer.go:320] Caches are synced for node config
	I0717 17:27:43.457639       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [2f15e40a181a] <==
	I0717 17:38:44.762819       1 server_linux.go:69] "Using iptables proxy"
	I0717 17:38:44.783856       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	I0717 17:38:44.830838       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 17:38:44.830870       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 17:38:44.830884       1 server_linux.go:165] "Using iptables Proxier"
	I0717 17:38:44.834309       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 17:38:44.834864       1 server.go:872] "Version info" version="v1.30.2"
	I0717 17:38:44.834894       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:38:44.836964       1 config.go:192] "Starting service config controller"
	I0717 17:38:44.837593       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 17:38:44.837672       1 config.go:101] "Starting endpoint slice config controller"
	I0717 17:38:44.837678       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 17:38:44.839841       1 config.go:319] "Starting node config controller"
	I0717 17:38:44.839870       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 17:38:44.938549       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 17:38:44.938751       1 shared_informer.go:320] Caches are synced for service config
	I0717 17:38:44.940510       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a53f8fcdf5d9] <==
	E0717 17:36:41.264926       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:36:42.998657       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:36:42.998862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:36:43.326673       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:36:43.327166       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:36:45.184656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:36:45.185412       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:36:52.182490       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:36:52.182723       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:37:00.423142       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:00.423274       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:37:01.259659       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:01.260400       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:37:02.377758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:02.378082       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:37:08.932628       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:08.932761       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	W0717 17:37:09.428412       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:09.428505       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0717 17:37:13.065507       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0717 17:37:13.067197       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0717 17:37:13.067371       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0717 17:37:13.067559       1 shared_informer.go:316] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 17:37:13.067604       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0717 17:37:13.067950       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b589feb3cd96] <==
	I0717 17:38:06.820740       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 17:41:14.239866       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vlsjj\": pod kindnet-vlsjj is already assigned to node \"ha-572000-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-vlsjj" node="ha-572000-m05"
	E0717 17:41:14.239947       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bf06a8b7-5c37-4959-8a51-d0be5c50ba7a(kube-system/kindnet-vlsjj) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-vlsjj"
	E0717 17:41:14.239965       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vlsjj\": pod kindnet-vlsjj is already assigned to node \"ha-572000-m05\"" pod="kube-system/kindnet-vlsjj"
	I0717 17:41:14.239980       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-vlsjj" node="ha-572000-m05"
	E0717 17:41:14.239465       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rdwjx\": pod kube-proxy-rdwjx is already assigned to node \"ha-572000-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rdwjx" node="ha-572000-m05"
	E0717 17:41:14.242527       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 949ed56b-85a2-4195-852b-78dc4bc5b578(kube-system/kube-proxy-rdwjx) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-rdwjx"
	E0717 17:41:14.249558       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rdwjx\": pod kube-proxy-rdwjx is already assigned to node \"ha-572000-m05\"" pod="kube-system/kube-proxy-rdwjx"
	I0717 17:41:14.249616       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rdwjx" node="ha-572000-m05"
	E0717 17:41:14.250217       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-p29wl\": pod kindnet-p29wl is already assigned to node \"ha-572000-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-p29wl" node="ha-572000-m05"
	E0717 17:41:14.252306       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-pzgmx\": pod kube-proxy-pzgmx is already assigned to node \"ha-572000-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-pzgmx" node="ha-572000-m05"
	E0717 17:41:14.252572       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bcc2c71d-a309-4576-9860-6418f0a2067d(kube-system/kube-proxy-pzgmx) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-pzgmx"
	E0717 17:41:14.252721       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-pzgmx\": pod kube-proxy-pzgmx is already assigned to node \"ha-572000-m05\"" pod="kube-system/kube-proxy-pzgmx"
	I0717 17:41:14.252854       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-pzgmx" node="ha-572000-m05"
	E0717 17:41:14.253203       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dpf85\": pod kindnet-dpf85 is already assigned to node \"ha-572000-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-dpf85" node="ha-572000-m05"
	E0717 17:41:14.253408       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod eca95df0-0ef0-44a6-b5de-bc7d469e569b(kube-system/kindnet-dpf85) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-dpf85"
	E0717 17:41:14.253544       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dpf85\": pod kindnet-dpf85 is already assigned to node \"ha-572000-m05\"" pod="kube-system/kindnet-dpf85"
	I0717 17:41:14.253603       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-dpf85" node="ha-572000-m05"
	E0717 17:41:14.250279       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 141e26ae-90e6-472e-8b26-fd21a5c88874(kube-system/kindnet-p29wl) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-p29wl"
	E0717 17:41:14.255824       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-p29wl\": pod kindnet-p29wl is already assigned to node \"ha-572000-m05\"" pod="kube-system/kindnet-p29wl"
	I0717 17:41:14.256117       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-p29wl" node="ha-572000-m05"
	E0717 17:41:14.270789       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-64xjf\": pod kube-proxy-64xjf is already assigned to node \"ha-572000-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-64xjf" node="ha-572000-m05"
	E0717 17:41:14.270845       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 40af5b55-9491-4221-9191-3d411d01d3a8(kube-system/kube-proxy-64xjf) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-64xjf"
	E0717 17:41:14.270858       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-64xjf\": pod kube-proxy-64xjf is already assigned to node \"ha-572000-m05\"" pod="kube-system/kube-proxy-64xjf"
	I0717 17:41:14.270885       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-64xjf" node="ha-572000-m05"
	
	
	==> kubelet <==
	Jul 17 17:39:28 ha-572000 kubelet[1589]: E0717 17:39:28.248343    1589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1801216d-6529-4049-b874-1577132fc03f)\"" pod="kube-system/storage-provisioner" podUID="1801216d-6529-4049-b874-1577132fc03f"
	Jul 17 17:39:39 ha-572000 kubelet[1589]: E0717 17:39:39.270524    1589 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:39:39 ha-572000 kubelet[1589]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:39:39 ha-572000 kubelet[1589]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:39:39 ha-572000 kubelet[1589]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:39:39 ha-572000 kubelet[1589]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:39:43 ha-572000 kubelet[1589]: I0717 17:39:43.248697    1589 scope.go:117] "RemoveContainer" containerID="a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403"
	Jul 17 17:39:43 ha-572000 kubelet[1589]: E0717 17:39:43.249374    1589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1801216d-6529-4049-b874-1577132fc03f)\"" pod="kube-system/storage-provisioner" podUID="1801216d-6529-4049-b874-1577132fc03f"
	Jul 17 17:39:54 ha-572000 kubelet[1589]: I0717 17:39:54.247534    1589 scope.go:117] "RemoveContainer" containerID="a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403"
	Jul 17 17:39:54 ha-572000 kubelet[1589]: E0717 17:39:54.248369    1589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1801216d-6529-4049-b874-1577132fc03f)\"" pod="kube-system/storage-provisioner" podUID="1801216d-6529-4049-b874-1577132fc03f"
	Jul 17 17:40:07 ha-572000 kubelet[1589]: I0717 17:40:07.247771    1589 scope.go:117] "RemoveContainer" containerID="a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403"
	Jul 17 17:40:07 ha-572000 kubelet[1589]: E0717 17:40:07.248147    1589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1801216d-6529-4049-b874-1577132fc03f)\"" pod="kube-system/storage-provisioner" podUID="1801216d-6529-4049-b874-1577132fc03f"
	Jul 17 17:40:22 ha-572000 kubelet[1589]: I0717 17:40:22.247319    1589 scope.go:117] "RemoveContainer" containerID="a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403"
	Jul 17 17:40:22 ha-572000 kubelet[1589]: E0717 17:40:22.247457    1589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1801216d-6529-4049-b874-1577132fc03f)\"" pod="kube-system/storage-provisioner" podUID="1801216d-6529-4049-b874-1577132fc03f"
	Jul 17 17:40:35 ha-572000 kubelet[1589]: I0717 17:40:35.248729    1589 scope.go:117] "RemoveContainer" containerID="a82cf6255e5a9038e6b9f99c7c61bb553ecd5668aef41bf2bd07606b44772403"
	Jul 17 17:40:39 ha-572000 kubelet[1589]: E0717 17:40:39.271602    1589 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:40:39 ha-572000 kubelet[1589]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:40:39 ha-572000 kubelet[1589]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:40:39 ha-572000 kubelet[1589]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:40:39 ha-572000 kubelet[1589]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:41:39 ha-572000 kubelet[1589]: E0717 17:41:39.273266    1589 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:41:39 ha-572000 kubelet[1589]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:41:39 ha-572000 kubelet[1589]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:41:39 ha-572000 kubelet[1589]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:41:39 ha-572000 kubelet[1589]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-572000 -n ha-572000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-572000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.74s)

                                                
                                    
x
+
TestImageBuild/serial/Setup (80.08s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-412000 --driver=hyperkit 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p image-412000 --driver=hyperkit : exit status 90 (1m19.926503672s)

                                                
                                                
-- stdout --
	* [image-412000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "image-412000" primary control-plane node in "image-412000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 17:42:35 image-412000 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 17:42:35 image-412000 dockerd[525]: time="2024-07-17T17:42:35.277896163Z" level=info msg="Starting up"
	Jul 17 17:42:35 image-412000 dockerd[525]: time="2024-07-17T17:42:35.278403670Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 17:42:35 image-412000 dockerd[525]: time="2024-07-17T17:42:35.278950982Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=531
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.294388313Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.309959312Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.310008738Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.310056878Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.310069088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.310133736Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.310166400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.310353193Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.310391063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.310403502Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.310410569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.310469682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.310641105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.312259953Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.312299175Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.312423634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.312458245Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.312527046Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.312588743Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.702840534Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.703032134Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.703081121Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.703119223Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.703152783Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.703261071Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.703483703Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.703598361Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.703638545Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.703670016Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.703705868Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.703736846Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.703766100Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.703796299Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.703827583Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.703857979Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.703895466Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.703973095Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704015480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704048964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704079668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704110147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704144255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704174769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704208182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704239599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704275163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704309268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704338439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704367721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704396594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704427671Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704463550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704494540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704530822Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704610655Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704654362Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704684869Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704714423Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704742519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704771710Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.704800323Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.705125449Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.705192998Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.705254085Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 17:42:35 image-412000 dockerd[531]: time="2024-07-17T17:42:35.705457691Z" level=info msg="containerd successfully booted in 0.411820s"
	Jul 17 17:42:36 image-412000 dockerd[525]: time="2024-07-17T17:42:36.303416338Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 17:42:36 image-412000 dockerd[525]: time="2024-07-17T17:42:36.308436905Z" level=info msg="Loading containers: start."
	Jul 17 17:42:36 image-412000 dockerd[525]: time="2024-07-17T17:42:36.397311692Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 17:42:36 image-412000 dockerd[525]: time="2024-07-17T17:42:36.489704491Z" level=info msg="Loading containers: done."
	Jul 17 17:42:36 image-412000 dockerd[525]: time="2024-07-17T17:42:36.505411122Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 17:42:36 image-412000 dockerd[525]: time="2024-07-17T17:42:36.505533201Z" level=info msg="Daemon has completed initialization"
	Jul 17 17:42:36 image-412000 dockerd[525]: time="2024-07-17T17:42:36.531286444Z" level=info msg="API listen on [::]:2376"
	Jul 17 17:42:36 image-412000 systemd[1]: Started Docker Application Container Engine.
	Jul 17 17:42:36 image-412000 dockerd[525]: time="2024-07-17T17:42:36.532312350Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 17:42:37 image-412000 dockerd[525]: time="2024-07-17T17:42:37.487606777Z" level=info msg="Processing signal 'terminated'"
	Jul 17 17:42:37 image-412000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 17:42:37 image-412000 dockerd[525]: time="2024-07-17T17:42:37.488821740Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 17:42:37 image-412000 dockerd[525]: time="2024-07-17T17:42:37.489044809Z" level=info msg="Daemon shutdown complete"
	Jul 17 17:42:37 image-412000 dockerd[525]: time="2024-07-17T17:42:37.489081467Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 17:42:37 image-412000 dockerd[525]: time="2024-07-17T17:42:37.489120438Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 17:42:38 image-412000 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 17:42:38 image-412000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 17:42:38 image-412000 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 17:42:38 image-412000 dockerd[930]: time="2024-07-17T17:42:38.530530152Z" level=info msg="Starting up"
	Jul 17 17:43:39 image-412000 dockerd[930]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 17:43:39 image-412000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 17:43:39 image-412000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 17:43:39 image-412000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-amd64 start -p image-412000 --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p image-412000 -n image-412000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p image-412000 -n image-412000: exit status 6 (151.451337ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 10:43:39.652550    3903 status.go:417] kubeconfig endpoint: get endpoint: "image-412000" does not appear in /Users/jenkins/minikube-integration/19283-1099/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "image-412000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestImageBuild/serial/Setup (80.08s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.45s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-093000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-093000 ssh -- mount | grep 9p
mount_start_test.go:127: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-1-093000 ssh -- mount | grep 9p: exit status 1 (125.590157ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
mount_start_test.go:129: failed to get mount information: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-093000 -n mount-start-1-093000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-093000 -n mount-start-1-093000: exit status 6 (151.550497ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 10:49:14.485893    4118 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-1-093000" does not appear in /Users/jenkins/minikube-integration/19283-1099/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-1-093000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountFirst (0.45s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (228.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-875000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-875000
E0717 10:52:51.212843    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-875000: (18.850660235s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-875000 --wait=true -v=8 --alsologtostderr
E0717 10:53:50.099872    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
E0717 10:54:48.168768    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-875000 --wait=true -v=8 --alsologtostderr: exit status 90 (3m25.533719238s)

                                                
                                                
-- stdout --
	* [multinode-875000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "multinode-875000" primary control-plane node in "multinode-875000" cluster
	* Restarting existing hyperkit VM for "multinode-875000" ...
	* Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-875000-m02" worker node in "multinode-875000" cluster
	* Restarting existing hyperkit VM for "multinode-875000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.15
	* Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	  - env NO_PROXY=192.169.0.15
	* Verifying Kubernetes components...
	
	* Starting "multinode-875000-m03" worker node in "multinode-875000" cluster
	* Restarting existing hyperkit VM for "multinode-875000-m03" ...
	* Found network options:
	  - NO_PROXY=192.169.0.15,192.169.0.16
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 10:53:08.440682    4493 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:53:08.440954    4493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:53:08.440959    4493 out.go:304] Setting ErrFile to fd 2...
	I0717 10:53:08.440963    4493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:53:08.441129    4493 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
	I0717 10:53:08.442482    4493 out.go:298] Setting JSON to false
	I0717 10:53:08.464365    4493 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3159,"bootTime":1721235629,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0717 10:53:08.464456    4493 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:53:08.486317    4493 out.go:177] * [multinode-875000] minikube v1.33.1 on Darwin 14.5
	I0717 10:53:08.528134    4493 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 10:53:08.528198    4493 notify.go:220] Checking for updates...
	I0717 10:53:08.571666    4493 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:53:08.593227    4493 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 10:53:08.614242    4493 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:53:08.635073    4493 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
	I0717 10:53:08.656076    4493 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:53:08.677950    4493 config.go:182] Loaded profile config "multinode-875000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:53:08.678135    4493 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:53:08.678838    4493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:53:08.678911    4493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:53:08.688398    4493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53167
	I0717 10:53:08.688792    4493 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:53:08.689192    4493 main.go:141] libmachine: Using API Version  1
	I0717 10:53:08.689208    4493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:53:08.689454    4493 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:53:08.689593    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:53:08.719815    4493 out.go:177] * Using the hyperkit driver based on existing profile
	I0717 10:53:08.762231    4493 start.go:297] selected driver: hyperkit
	I0717 10:53:08.762256    4493 start.go:901] validating driver "hyperkit" against &{Name:multinode-875000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.15 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.17 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:53:08.762479    4493 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:53:08.762666    4493 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:53:08.762865    4493 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19283-1099/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0717 10:53:08.772341    4493 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0717 10:53:08.776095    4493 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:53:08.776116    4493 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0717 10:53:08.778717    4493 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:53:08.778754    4493 cni.go:84] Creating CNI manager for ""
	I0717 10:53:08.778762    4493 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0717 10:53:08.778833    4493 start.go:340] cluster config:
	{Name:multinode-875000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.15 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.17 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:53:08.778961    4493 iso.go:125] acquiring lock: {Name:mkf51f842bcc8a77e9c7c50d642c4c76848e96af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:53:08.821198    4493 out.go:177] * Starting "multinode-875000" primary control-plane node in "multinode-875000" cluster
	I0717 10:53:08.842213    4493 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:53:08.842307    4493 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0717 10:53:08.842336    4493 cache.go:56] Caching tarball of preloaded images
	I0717 10:53:08.842535    4493 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:53:08.842553    4493 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:53:08.842741    4493 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/config.json ...
	I0717 10:53:08.843662    4493 start.go:360] acquireMachinesLock for multinode-875000: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:53:08.843803    4493 start.go:364] duration metric: took 84.331µs to acquireMachinesLock for "multinode-875000"
	I0717 10:53:08.843854    4493 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:53:08.843874    4493 fix.go:54] fixHost starting: 
	I0717 10:53:08.844316    4493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:53:08.844355    4493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:53:08.853323    4493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53169
	I0717 10:53:08.853666    4493 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:53:08.854064    4493 main.go:141] libmachine: Using API Version  1
	I0717 10:53:08.854087    4493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:53:08.854307    4493 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:53:08.854421    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:53:08.854517    4493 main.go:141] libmachine: (multinode-875000) Calling .GetState
	I0717 10:53:08.854604    4493 main.go:141] libmachine: (multinode-875000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:53:08.854678    4493 main.go:141] libmachine: (multinode-875000) DBG | hyperkit pid from json: 4146
	I0717 10:53:08.855578    4493 main.go:141] libmachine: (multinode-875000) DBG | hyperkit pid 4146 missing from process table
	I0717 10:53:08.855605    4493 fix.go:112] recreateIfNeeded on multinode-875000: state=Stopped err=<nil>
	I0717 10:53:08.855625    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	W0717 10:53:08.855704    4493 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:53:08.876897    4493 out.go:177] * Restarting existing hyperkit VM for "multinode-875000" ...
	I0717 10:53:08.919115    4493 main.go:141] libmachine: (multinode-875000) Calling .Start
	I0717 10:53:08.919441    4493 main.go:141] libmachine: (multinode-875000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:53:08.919503    4493 main.go:141] libmachine: (multinode-875000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/hyperkit.pid
	I0717 10:53:08.921222    4493 main.go:141] libmachine: (multinode-875000) DBG | hyperkit pid 4146 missing from process table
	I0717 10:53:08.921259    4493 main.go:141] libmachine: (multinode-875000) DBG | pid 4146 is in state "Stopped"
	I0717 10:53:08.921271    4493 main.go:141] libmachine: (multinode-875000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/hyperkit.pid...
	I0717 10:53:08.921823    4493 main.go:141] libmachine: (multinode-875000) DBG | Using UUID 0b492f0d-cc97-495d-b943-8b478d8e6ab6
	I0717 10:53:09.032299    4493 main.go:141] libmachine: (multinode-875000) DBG | Generated MAC 92:c1:c6:6d:b5:4e
	I0717 10:53:09.032323    4493 main.go:141] libmachine: (multinode-875000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-875000
	I0717 10:53:09.032452    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0b492f0d-cc97-495d-b943-8b478d8e6ab6", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037b4a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:53:09.032486    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0b492f0d-cc97-495d-b943-8b478d8e6ab6", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037b4a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0717 10:53:09.032535    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "0b492f0d-cc97-495d-b943-8b478d8e6ab6", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/multinode-875000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-875000"}
	I0717 10:53:09.032569    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 0b492f0d-cc97-495d-b943-8b478d8e6ab6 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/multinode-875000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-875000"
	I0717 10:53:09.032601    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:53:09.034110    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 DEBUG: hyperkit: Pid is 4506
	I0717 10:53:09.034778    4493 main.go:141] libmachine: (multinode-875000) DBG | Attempt 0
	I0717 10:53:09.034790    4493 main.go:141] libmachine: (multinode-875000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:53:09.034924    4493 main.go:141] libmachine: (multinode-875000) DBG | hyperkit pid from json: 4506
	I0717 10:53:09.036394    4493 main.go:141] libmachine: (multinode-875000) DBG | Searching for 92:c1:c6:6d:b5:4e in /var/db/dhcpd_leases ...
	I0717 10:53:09.036449    4493 main.go:141] libmachine: (multinode-875000) DBG | Found 16 entries in /var/db/dhcpd_leases!
	I0717 10:53:09.036479    4493 main.go:141] libmachine: (multinode-875000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:a2:dd:4c:c6:bd:14 ID:1,a2:dd:4c:c6:bd:14 Lease:0x669804f2}
	I0717 10:53:09.036501    4493 main.go:141] libmachine: (multinode-875000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:de:84:ef:f1:8f:c7 ID:1,de:84:ef:f1:8f:c7 Lease:0x669955e8}
	I0717 10:53:09.036514    4493 main.go:141] libmachine: (multinode-875000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:c1:c6:6d:b5:4e ID:1,92:c1:c6:6d:b5:4e Lease:0x669955a6}
	I0717 10:53:09.036529    4493 main.go:141] libmachine: (multinode-875000) DBG | Found match: 92:c1:c6:6d:b5:4e
	I0717 10:53:09.036559    4493 main.go:141] libmachine: (multinode-875000) DBG | IP: 192.169.0.15
	I0717 10:53:09.036578    4493 main.go:141] libmachine: (multinode-875000) Calling .GetConfigRaw
	I0717 10:53:09.037361    4493 main.go:141] libmachine: (multinode-875000) Calling .GetIP
	I0717 10:53:09.037542    4493 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/config.json ...
	I0717 10:53:09.037966    4493 machine.go:94] provisionDockerMachine start ...
	I0717 10:53:09.037977    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:53:09.038140    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:09.038269    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:53:09.038374    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:09.038504    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:09.038606    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:53:09.038734    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:53:09.038932    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0717 10:53:09.038940    4493 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:53:09.041588    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:53:09.095716    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:53:09.096409    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:53:09.096424    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:53:09.096432    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:53:09.096439    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:53:09.474480    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:53:09.474493    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:53:09.589608    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:53:09.589628    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:53:09.589640    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:53:09.589653    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:53:09.590554    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:53:09.590567    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:53:14.827254    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:14 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:53:14.827270    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:14 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:53:14.827324    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:14 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:53:14.851869    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:14 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:53:44.107008    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:53:44.107032    4493 main.go:141] libmachine: (multinode-875000) Calling .GetMachineName
	I0717 10:53:44.107184    4493 buildroot.go:166] provisioning hostname "multinode-875000"
	I0717 10:53:44.107196    4493 main.go:141] libmachine: (multinode-875000) Calling .GetMachineName
	I0717 10:53:44.107289    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:44.107373    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:53:44.107466    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:44.107551    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:44.107641    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:53:44.107772    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:53:44.107923    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0717 10:53:44.107931    4493 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-875000 && echo "multinode-875000" | sudo tee /etc/hostname
	I0717 10:53:44.174663    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-875000
	
	I0717 10:53:44.174681    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:44.174841    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:53:44.174935    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:44.175018    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:44.175114    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:53:44.175243    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:53:44.175395    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0717 10:53:44.175407    4493 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-875000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-875000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-875000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:53:44.239111    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:53:44.239133    4493 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:53:44.239156    4493 buildroot.go:174] setting up certificates
	I0717 10:53:44.239163    4493 provision.go:84] configureAuth start
	I0717 10:53:44.239170    4493 main.go:141] libmachine: (multinode-875000) Calling .GetMachineName
	I0717 10:53:44.239323    4493 main.go:141] libmachine: (multinode-875000) Calling .GetIP
	I0717 10:53:44.239447    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:44.239546    4493 provision.go:143] copyHostCerts
	I0717 10:53:44.239577    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:53:44.239663    4493 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:53:44.239671    4493 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:53:44.239873    4493 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:53:44.240100    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:53:44.240152    4493 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:53:44.240158    4493 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:53:44.240239    4493 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:53:44.240396    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:53:44.240438    4493 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:53:44.240443    4493 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:53:44.240526    4493 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:53:44.240673    4493 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.multinode-875000 san=[127.0.0.1 192.169.0.15 localhost minikube multinode-875000]
	I0717 10:53:44.434192    4493 provision.go:177] copyRemoteCerts
	I0717 10:53:44.434250    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:53:44.434269    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:44.434410    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:53:44.434519    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:44.434626    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:53:44.434715    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/id_rsa Username:docker}
	I0717 10:53:44.468908    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:53:44.468981    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:53:44.488630    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:53:44.488690    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0717 10:53:44.508446    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:53:44.508513    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 10:53:44.528227    4493 provision.go:87] duration metric: took 289.042237ms to configureAuth
	I0717 10:53:44.528241    4493 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:53:44.528415    4493 config.go:182] Loaded profile config "multinode-875000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:53:44.528429    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:53:44.528563    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:44.528658    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:53:44.528735    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:44.528813    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:44.528888    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:53:44.529001    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:53:44.529113    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0717 10:53:44.529120    4493 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:53:44.586725    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:53:44.586737    4493 buildroot.go:70] root file system type: tmpfs
	I0717 10:53:44.586812    4493 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:53:44.586825    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:44.586946    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:53:44.587041    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:44.587130    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:44.587206    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:53:44.587338    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:53:44.587473    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0717 10:53:44.587516    4493 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:53:44.657154    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:53:44.657176    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:44.657326    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:53:44.657422    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:44.657530    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:44.657627    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:53:44.657793    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:53:44.657950    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0717 10:53:44.657962    4493 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:53:46.286868    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:53:46.286885    4493 machine.go:97] duration metric: took 37.247877396s to provisionDockerMachine
	I0717 10:53:46.286899    4493 start.go:293] postStartSetup for "multinode-875000" (driver="hyperkit")
	I0717 10:53:46.286907    4493 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:53:46.286920    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:53:46.287106    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:53:46.287119    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:46.287234    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:53:46.287334    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:46.287432    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:53:46.287518    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/id_rsa Username:docker}
	I0717 10:53:46.323841    4493 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:53:46.326746    4493 command_runner.go:130] > NAME=Buildroot
	I0717 10:53:46.326765    4493 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0717 10:53:46.326770    4493 command_runner.go:130] > ID=buildroot
	I0717 10:53:46.326774    4493 command_runner.go:130] > VERSION_ID=2023.02.9
	I0717 10:53:46.326778    4493 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0717 10:53:46.326891    4493 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:53:46.326903    4493 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:53:46.327001    4493 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:53:46.327192    4493 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:53:46.327199    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:53:46.327412    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:53:46.335200    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:53:46.354322    4493 start.go:296] duration metric: took 67.412253ms for postStartSetup
	I0717 10:53:46.354346    4493 fix.go:56] duration metric: took 37.509442863s for fixHost
	I0717 10:53:46.354359    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:46.354492    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:53:46.354588    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:46.354663    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:46.354756    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:53:46.354873    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:53:46.355011    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0717 10:53:46.355018    4493 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 10:53:46.413735    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721238826.487514067
	
	I0717 10:53:46.413746    4493 fix.go:216] guest clock: 1721238826.487514067
	I0717 10:53:46.413751    4493 fix.go:229] Guest: 2024-07-17 10:53:46.487514067 -0700 PDT Remote: 2024-07-17 10:53:46.354349 -0700 PDT m=+37.949500651 (delta=133.165067ms)
	I0717 10:53:46.413777    4493 fix.go:200] guest clock delta is within tolerance: 133.165067ms
	I0717 10:53:46.413782    4493 start.go:83] releasing machines lock for "multinode-875000", held for 37.568918907s
	I0717 10:53:46.413799    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:53:46.413927    4493 main.go:141] libmachine: (multinode-875000) Calling .GetIP
	I0717 10:53:46.414023    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:53:46.414324    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:53:46.414437    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:53:46.414519    4493 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:53:46.414551    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:46.414592    4493 ssh_runner.go:195] Run: cat /version.json
	I0717 10:53:46.414604    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:46.414665    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:53:46.414712    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:53:46.414754    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:46.414804    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:46.414827    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:53:46.414899    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/id_rsa Username:docker}
	I0717 10:53:46.414916    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:53:46.415015    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/id_rsa Username:docker}
	I0717 10:53:46.446050    4493 command_runner.go:130] > {"iso_version": "v1.33.1-1721146474-19264", "kicbase_version": "v0.0.44-1721064868-19249", "minikube_version": "v1.33.1", "commit": "6e0d7ef26437c947028f356d4449a323918e966e"}
	I0717 10:53:46.446222    4493 ssh_runner.go:195] Run: systemctl --version
	I0717 10:53:46.495969    4493 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 10:53:46.497057    4493 command_runner.go:130] > systemd 252 (252)
	I0717 10:53:46.497112    4493 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0717 10:53:46.497244    4493 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 10:53:46.502202    4493 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 10:53:46.502226    4493 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:53:46.502268    4493 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:53:46.514783    4493 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0717 10:53:46.514802    4493 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:53:46.514814    4493 start.go:495] detecting cgroup driver to use...
	I0717 10:53:46.514919    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:53:46.529710    4493 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0717 10:53:46.529926    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:53:46.538945    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:53:46.547744    4493 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:53:46.547783    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:53:46.556835    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:53:46.565925    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:53:46.574800    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:53:46.583709    4493 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:53:46.592744    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:53:46.601366    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:53:46.610134    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:53:46.619067    4493 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:53:46.627080    4493 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 10:53:46.627236    4493 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:53:46.635244    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:53:46.730124    4493 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:53:46.744976    4493 start.go:495] detecting cgroup driver to use...
	I0717 10:53:46.745053    4493 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:53:46.755963    4493 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0717 10:53:46.756906    4493 command_runner.go:130] > [Unit]
	I0717 10:53:46.756917    4493 command_runner.go:130] > Description=Docker Application Container Engine
	I0717 10:53:46.756922    4493 command_runner.go:130] > Documentation=https://docs.docker.com
	I0717 10:53:46.756927    4493 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0717 10:53:46.756941    4493 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0717 10:53:46.756946    4493 command_runner.go:130] > StartLimitBurst=3
	I0717 10:53:46.756950    4493 command_runner.go:130] > StartLimitIntervalSec=60
	I0717 10:53:46.756954    4493 command_runner.go:130] > [Service]
	I0717 10:53:46.756957    4493 command_runner.go:130] > Type=notify
	I0717 10:53:46.756961    4493 command_runner.go:130] > Restart=on-failure
	I0717 10:53:46.756967    4493 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0717 10:53:46.756975    4493 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0717 10:53:46.756981    4493 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0717 10:53:46.756987    4493 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0717 10:53:46.756992    4493 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0717 10:53:46.756997    4493 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0717 10:53:46.757004    4493 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0717 10:53:46.757013    4493 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0717 10:53:46.757023    4493 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0717 10:53:46.757028    4493 command_runner.go:130] > ExecStart=
	I0717 10:53:46.757044    4493 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0717 10:53:46.757049    4493 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0717 10:53:46.757056    4493 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0717 10:53:46.757062    4493 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0717 10:53:46.757066    4493 command_runner.go:130] > LimitNOFILE=infinity
	I0717 10:53:46.757070    4493 command_runner.go:130] > LimitNPROC=infinity
	I0717 10:53:46.757074    4493 command_runner.go:130] > LimitCORE=infinity
	I0717 10:53:46.757078    4493 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0717 10:53:46.757083    4493 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0717 10:53:46.757087    4493 command_runner.go:130] > TasksMax=infinity
	I0717 10:53:46.757090    4493 command_runner.go:130] > TimeoutStartSec=0
	I0717 10:53:46.757095    4493 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0717 10:53:46.757100    4493 command_runner.go:130] > Delegate=yes
	I0717 10:53:46.757105    4493 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0717 10:53:46.757108    4493 command_runner.go:130] > KillMode=process
	I0717 10:53:46.757113    4493 command_runner.go:130] > [Install]
	I0717 10:53:46.757132    4493 command_runner.go:130] > WantedBy=multi-user.target
	I0717 10:53:46.757267    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:53:46.768312    4493 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:53:46.780908    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:53:46.792555    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:53:46.803455    4493 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:53:46.828999    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:53:46.841901    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:53:46.858496    4493 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0717 10:53:46.858783    4493 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:53:46.861485    4493 command_runner.go:130] > /usr/bin/cri-dockerd
	I0717 10:53:46.861702    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:53:46.868721    4493 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:53:46.882414    4493 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:53:46.980451    4493 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:53:47.086611    4493 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:53:47.086712    4493 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:53:47.101549    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:53:47.197870    4493 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:53:49.510733    4493 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.312762402s)
	I0717 10:53:49.510816    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:53:49.521296    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:53:49.531765    4493 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:53:49.624402    4493 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:53:49.727316    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:53:49.832508    4493 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:53:49.846162    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:53:49.857284    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:53:49.953870    4493 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 10:53:50.012779    4493 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:53:50.012864    4493 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:53:50.016767    4493 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0717 10:53:50.016783    4493 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 10:53:50.016791    4493 command_runner.go:130] > Device: 0,22	Inode: 758         Links: 1
	I0717 10:53:50.016799    4493 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0717 10:53:50.016803    4493 command_runner.go:130] > Access: 2024-07-17 17:53:50.041558660 +0000
	I0717 10:53:50.016832    4493 command_runner.go:130] > Modify: 2024-07-17 17:53:50.041558660 +0000
	I0717 10:53:50.016837    4493 command_runner.go:130] > Change: 2024-07-17 17:53:50.043558660 +0000
	I0717 10:53:50.016841    4493 command_runner.go:130] >  Birth: -
	I0717 10:53:50.017042    4493 start.go:563] Will wait 60s for crictl version
	I0717 10:53:50.017093    4493 ssh_runner.go:195] Run: which crictl
	I0717 10:53:50.021044    4493 command_runner.go:130] > /usr/bin/crictl
	I0717 10:53:50.021140    4493 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:53:50.046022    4493 command_runner.go:130] > Version:  0.1.0
	I0717 10:53:50.046035    4493 command_runner.go:130] > RuntimeName:  docker
	I0717 10:53:50.046039    4493 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0717 10:53:50.046043    4493 command_runner.go:130] > RuntimeApiVersion:  v1
	I0717 10:53:50.047026    4493 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 10:53:50.047098    4493 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:53:50.063588    4493 command_runner.go:130] > 27.0.3
	I0717 10:53:50.064527    4493 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:53:50.080049    4493 command_runner.go:130] > 27.0.3
	I0717 10:53:50.126676    4493 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:53:50.126730    4493 main.go:141] libmachine: (multinode-875000) Calling .GetIP
	I0717 10:53:50.127131    4493 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:53:50.132176    4493 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:53:50.141837    4493 kubeadm.go:883] updating cluster {Name:multinode-875000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.15 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.17 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 10:53:50.141927    4493 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:53:50.141982    4493 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 10:53:50.153814    4493 command_runner.go:130] > kindest/kindnetd:v20240715-585640e9
	I0717 10:53:50.153827    4493 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0717 10:53:50.153832    4493 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0717 10:53:50.153836    4493 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0717 10:53:50.153839    4493 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0717 10:53:50.153843    4493 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0717 10:53:50.153847    4493 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0717 10:53:50.153850    4493 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0717 10:53:50.153856    4493 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 10:53:50.153860    4493 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0717 10:53:50.154664    4493 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0717 10:53:50.154675    4493 docker.go:615] Images already preloaded, skipping extraction
	I0717 10:53:50.154745    4493 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 10:53:50.167432    4493 command_runner.go:130] > kindest/kindnetd:v20240715-585640e9
	I0717 10:53:50.167445    4493 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0717 10:53:50.167450    4493 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0717 10:53:50.167455    4493 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0717 10:53:50.167460    4493 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0717 10:53:50.167463    4493 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0717 10:53:50.167468    4493 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0717 10:53:50.167473    4493 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0717 10:53:50.167477    4493 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 10:53:50.167481    4493 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0717 10:53:50.168119    4493 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0717 10:53:50.168135    4493 cache_images.go:84] Images are preloaded, skipping loading
	I0717 10:53:50.168143    4493 kubeadm.go:934] updating node { 192.169.0.15 8443 v1.30.2 docker true true} ...
	I0717 10:53:50.168220    4493 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-875000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 10:53:50.168290    4493 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 10:53:50.185058    4493 command_runner.go:130] > cgroupfs
	I0717 10:53:50.185937    4493 cni.go:84] Creating CNI manager for ""
	I0717 10:53:50.185946    4493 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0717 10:53:50.185956    4493 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 10:53:50.185976    4493 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.15 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-875000 NodeName:multinode-875000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 10:53:50.186060    4493 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-875000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 10:53:50.186121    4493 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:53:50.193740    4493 command_runner.go:130] > kubeadm
	I0717 10:53:50.193748    4493 command_runner.go:130] > kubectl
	I0717 10:53:50.193752    4493 command_runner.go:130] > kubelet
	I0717 10:53:50.193938    4493 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:53:50.193982    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 10:53:50.201366    4493 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0717 10:53:50.215766    4493 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:53:50.229396    4493 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0717 10:53:50.243072    4493 ssh_runner.go:195] Run: grep 192.169.0.15	control-plane.minikube.internal$ /etc/hosts
	I0717 10:53:50.246029    4493 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
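The bash one-liner above pins control-plane.minikube.internal to the node IP by rewriting /etc/hosts: drop any existing entry for that hostname, then append the current IP. The same edit sketched in Go (hostname and IP taken from the log; this would need privileges that can write /etc/hosts):

-- illustrative Go sketch --
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const suffix = "\tcontrol-plane.minikube.internal"
	const entry = "192.169.0.15" + suffix

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	// Drop any existing control-plane entry, then append the current one.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, suffix) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		fmt.Println("write failed:", err)
	}
}
-- /illustrative Go sketch --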
	I0717 10:53:50.255461    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:53:50.344956    4493 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:53:50.360068    4493 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000 for IP: 192.169.0.15
	I0717 10:53:50.360081    4493 certs.go:194] generating shared ca certs ...
	I0717 10:53:50.360092    4493 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:53:50.360278    4493 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:53:50.360353    4493 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:53:50.360363    4493 certs.go:256] generating profile certs ...
	I0717 10:53:50.360474    4493 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/client.key
	I0717 10:53:50.360554    4493 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/apiserver.key.20aa8b3c
	I0717 10:53:50.360623    4493 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/proxy-client.key
	I0717 10:53:50.360630    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:53:50.360651    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:53:50.360669    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:53:50.360687    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:53:50.360705    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 10:53:50.360735    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 10:53:50.360768    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 10:53:50.360788    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 10:53:50.360898    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:53:50.360948    4493 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:53:50.360957    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:53:50.360991    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:53:50.361022    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:53:50.361051    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:53:50.361117    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:53:50.361151    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:53:50.361173    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:53:50.361191    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:53:50.361711    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:53:50.402390    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:53:50.429736    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:53:50.455959    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:53:50.476621    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 10:53:50.496672    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 10:53:50.516680    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 10:53:50.536803    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 10:53:50.556794    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:53:50.576815    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:53:50.596757    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:53:50.616657    4493 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 10:53:50.630640    4493 ssh_runner.go:195] Run: openssl version
	I0717 10:53:50.634718    4493 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0717 10:53:50.634860    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:53:50.643295    4493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:53:50.646578    4493 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:53:50.646698    4493 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:53:50.646737    4493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:53:50.650875    4493 command_runner.go:130] > 3ec20f2e
	I0717 10:53:50.651029    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 10:53:50.659564    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:53:50.667965    4493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:53:50.671241    4493 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:53:50.671375    4493 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:53:50.671413    4493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:53:50.675460    4493 command_runner.go:130] > b5213941
	I0717 10:53:50.675590    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:53:50.683988    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:53:50.692486    4493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:53:50.695766    4493 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:53:50.695880    4493 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:53:50.695911    4493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:53:50.699964    4493 command_runner.go:130] > 51391683
	I0717 10:53:50.700098    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
	I0717 10:53:50.708346    4493 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:53:50.711619    4493 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:53:50.711631    4493 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0717 10:53:50.711638    4493 command_runner.go:130] > Device: 253,1	Inode: 531538      Links: 1
	I0717 10:53:50.711649    4493 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 10:53:50.711656    4493 command_runner.go:130] > Access: 2024-07-17 17:49:41.246167045 +0000
	I0717 10:53:50.711661    4493 command_runner.go:130] > Modify: 2024-07-17 17:49:41.246167045 +0000
	I0717 10:53:50.711665    4493 command_runner.go:130] > Change: 2024-07-17 17:49:41.246167045 +0000
	I0717 10:53:50.711669    4493 command_runner.go:130] >  Birth: 2024-07-17 17:49:41.246167045 +0000
	I0717 10:53:50.711776    4493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 10:53:50.716013    4493 command_runner.go:130] > Certificate will not expire
	I0717 10:53:50.716105    4493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 10:53:50.720269    4493 command_runner.go:130] > Certificate will not expire
	I0717 10:53:50.720423    4493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 10:53:50.724600    4493 command_runner.go:130] > Certificate will not expire
	I0717 10:53:50.724775    4493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 10:53:50.728857    4493 command_runner.go:130] > Certificate will not expire
	I0717 10:53:50.728989    4493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 10:53:50.733108    4493 command_runner.go:130] > Certificate will not expire
	I0717 10:53:50.733337    4493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 10:53:50.737381    4493 command_runner.go:130] > Certificate will not expire
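Each "openssl x509 -noout -checkend 86400" call above asks whether a certificate expires within the next 24 hours. A stand-alone equivalent written against crypto/x509, using one of the certificate paths from the log (a sketch, not the test's own code):

-- illustrative Go sketch --
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
-- /illustrative Go sketch --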
	I0717 10:53:50.737570    4493 kubeadm.go:392] StartCluster: {Name:multinode-875000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.15 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.17 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:53:50.737674    4493 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 10:53:50.750520    4493 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 10:53:50.758116    4493 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0717 10:53:50.758125    4493 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0717 10:53:50.758130    4493 command_runner.go:130] > /var/lib/minikube/etcd:
	I0717 10:53:50.758136    4493 command_runner.go:130] > member
	I0717 10:53:50.758214    4493 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 10:53:50.758226    4493 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 10:53:50.758268    4493 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 10:53:50.765622    4493 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 10:53:50.765931    4493 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-875000" does not appear in /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:53:50.766023    4493 kubeconfig.go:62] /Users/jenkins/minikube-integration/19283-1099/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-875000" cluster setting kubeconfig missing "multinode-875000" context setting]
	I0717 10:53:50.766197    4493 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:53:50.766873    4493 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:53:50.767061    4493 kapi.go:59] client config for multinode-875000: &rest.Config{Host:"https://192.169.0.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xeec6b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 10:53:50.767404    4493 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 10:53:50.767531    4493 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 10:53:50.774780    4493 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.15
	I0717 10:53:50.774798    4493 kubeadm.go:1160] stopping kube-system containers ...
	I0717 10:53:50.774856    4493 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 10:53:50.788254    4493 command_runner.go:130] > 628249f927da
	I0717 10:53:50.788267    4493 command_runner.go:130] > 29731a7ae130
	I0717 10:53:50.788270    4493 command_runner.go:130] > 8d5379f364df
	I0717 10:53:50.788274    4493 command_runner.go:130] > ed1c80ce77a0
	I0717 10:53:50.788277    4493 command_runner.go:130] > f9b27278d789
	I0717 10:53:50.788280    4493 command_runner.go:130] > cdb993aecac1
	I0717 10:53:50.788284    4493 command_runner.go:130] > 6c2c175018f8
	I0717 10:53:50.788287    4493 command_runner.go:130] > 004a5be3ccef
	I0717 10:53:50.788290    4493 command_runner.go:130] > fbeb1615ce07
	I0717 10:53:50.788294    4493 command_runner.go:130] > 2966fb0e7dc1
	I0717 10:53:50.788303    4493 command_runner.go:130] > 6a219499b617
	I0717 10:53:50.788306    4493 command_runner.go:130] > f441455bef84
	I0717 10:53:50.788311    4493 command_runner.go:130] > 4d352419a758
	I0717 10:53:50.788315    4493 command_runner.go:130] > 3f3c486ee3b8
	I0717 10:53:50.788318    4493 command_runner.go:130] > 4355a2bd64f7
	I0717 10:53:50.788322    4493 command_runner.go:130] > c6831086186c
	I0717 10:53:50.788775    4493 docker.go:483] Stopping containers: [628249f927da 29731a7ae130 8d5379f364df ed1c80ce77a0 f9b27278d789 cdb993aecac1 6c2c175018f8 004a5be3ccef fbeb1615ce07 2966fb0e7dc1 6a219499b617 f441455bef84 4d352419a758 3f3c486ee3b8 4355a2bd64f7 c6831086186c]
	I0717 10:53:50.788852    4493 ssh_runner.go:195] Run: docker stop 628249f927da 29731a7ae130 8d5379f364df ed1c80ce77a0 f9b27278d789 cdb993aecac1 6c2c175018f8 004a5be3ccef fbeb1615ce07 2966fb0e7dc1 6a219499b617 f441455bef84 4d352419a758 3f3c486ee3b8 4355a2bd64f7 c6831086186c
	I0717 10:53:50.804816    4493 command_runner.go:130] > 628249f927da
	I0717 10:53:50.804828    4493 command_runner.go:130] > 29731a7ae130
	I0717 10:53:50.804832    4493 command_runner.go:130] > 8d5379f364df
	I0717 10:53:50.804835    4493 command_runner.go:130] > ed1c80ce77a0
	I0717 10:53:50.804839    4493 command_runner.go:130] > f9b27278d789
	I0717 10:53:50.804860    4493 command_runner.go:130] > cdb993aecac1
	I0717 10:53:50.804869    4493 command_runner.go:130] > 6c2c175018f8
	I0717 10:53:50.804872    4493 command_runner.go:130] > 004a5be3ccef
	I0717 10:53:50.804875    4493 command_runner.go:130] > fbeb1615ce07
	I0717 10:53:50.804879    4493 command_runner.go:130] > 2966fb0e7dc1
	I0717 10:53:50.804883    4493 command_runner.go:130] > 6a219499b617
	I0717 10:53:50.804886    4493 command_runner.go:130] > f441455bef84
	I0717 10:53:50.804889    4493 command_runner.go:130] > 4d352419a758
	I0717 10:53:50.804892    4493 command_runner.go:130] > 3f3c486ee3b8
	I0717 10:53:50.804895    4493 command_runner.go:130] > 4355a2bd64f7
	I0717 10:53:50.804898    4493 command_runner.go:130] > c6831086186c
	I0717 10:53:50.804976    4493 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 10:53:50.817155    4493 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 10:53:50.824524    4493 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0717 10:53:50.824535    4493 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0717 10:53:50.824541    4493 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0717 10:53:50.824547    4493 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 10:53:50.824578    4493 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 10:53:50.824584    4493 kubeadm.go:157] found existing configuration files:
	
	I0717 10:53:50.824624    4493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 10:53:50.831830    4493 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 10:53:50.831856    4493 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 10:53:50.831901    4493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 10:53:50.839256    4493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 10:53:50.846349    4493 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 10:53:50.846368    4493 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 10:53:50.846403    4493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 10:53:50.853788    4493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 10:53:50.861130    4493 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 10:53:50.861146    4493 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 10:53:50.861179    4493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 10:53:50.868477    4493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 10:53:50.875549    4493 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 10:53:50.875573    4493 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 10:53:50.875612    4493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
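The four grep-then-rm pairs above all apply one rule: a kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443; otherwise it is removed so kubeadm can regenerate it. A compact Go rendering of that loop (a sketch under those assumptions, not minikube's implementation):

-- illustrative Go sketch --
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// Missing file or missing endpoint: remove so kubeadm rewrites it.
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("removing stale or missing %s\n", f)
			os.Remove(f)
			continue
		}
		fmt.Printf("keeping %s\n", f)
	}
}
-- /illustrative Go sketch --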
	I0717 10:53:50.882814    4493 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 10:53:50.890101    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 10:53:50.951322    4493 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 10:53:50.951454    4493 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0717 10:53:50.951640    4493 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0717 10:53:50.951807    4493 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 10:53:50.952063    4493 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0717 10:53:50.952329    4493 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0717 10:53:50.952683    4493 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0717 10:53:50.952827    4493 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0717 10:53:50.952998    4493 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0717 10:53:50.953189    4493 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 10:53:50.953340    4493 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 10:53:50.953548    4493 command_runner.go:130] > [certs] Using the existing "sa" key
	I0717 10:53:50.954497    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 10:53:50.991733    4493 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 10:53:51.385775    4493 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 10:53:51.709655    4493 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 10:53:51.893900    4493 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 10:53:51.988631    4493 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 10:53:52.421536    4493 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 10:53:52.423448    4493 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.468896803s)
	I0717 10:53:52.423462    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 10:53:52.473231    4493 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 10:53:52.473980    4493 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 10:53:52.474003    4493 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0717 10:53:52.580900    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 10:53:52.635178    4493 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 10:53:52.635192    4493 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 10:53:52.636821    4493 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 10:53:52.643807    4493 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 10:53:52.646004    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 10:53:52.730917    4493 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 10:53:52.740305    4493 api_server.go:52] waiting for apiserver process to appear ...
	I0717 10:53:52.740372    4493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 10:53:53.240795    4493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 10:53:53.740591    4493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 10:53:53.752086    4493 command_runner.go:130] > 1676
	I0717 10:53:53.752559    4493 api_server.go:72] duration metric: took 1.012234226s to wait for apiserver process to appear ...
	I0717 10:53:53.752568    4493 api_server.go:88] waiting for apiserver healthz status ...
	I0717 10:53:53.752583    4493 api_server.go:253] Checking apiserver healthz at https://192.169.0.15:8443/healthz ...
	I0717 10:53:55.371559    4493 api_server.go:279] https://192.169.0.15:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 10:53:55.371577    4493 api_server.go:103] status: https://192.169.0.15:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 10:53:55.371588    4493 api_server.go:253] Checking apiserver healthz at https://192.169.0.15:8443/healthz ...
	I0717 10:53:55.398484    4493 api_server.go:279] https://192.169.0.15:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 10:53:55.398499    4493 api_server.go:103] status: https://192.169.0.15:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 10:53:55.753909    4493 api_server.go:253] Checking apiserver healthz at https://192.169.0.15:8443/healthz ...
	I0717 10:53:55.758845    4493 api_server.go:279] https://192.169.0.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 10:53:55.758858    4493 api_server.go:103] status: https://192.169.0.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 10:53:56.254786    4493 api_server.go:253] Checking apiserver healthz at https://192.169.0.15:8443/healthz ...
	I0717 10:53:56.259708    4493 api_server.go:279] https://192.169.0.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 10:53:56.259720    4493 api_server.go:103] status: https://192.169.0.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 10:53:56.753673    4493 api_server.go:253] Checking apiserver healthz at https://192.169.0.15:8443/healthz ...
	I0717 10:53:56.756594    4493 api_server.go:279] https://192.169.0.15:8443/healthz returned 200:
	ok
	I0717 10:53:56.756652    4493 round_trippers.go:463] GET https://192.169.0.15:8443/version
	I0717 10:53:56.756657    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:56.756663    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:56.756669    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:56.761255    4493 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:53:56.761264    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:56.761269    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:56.761273    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:56.761276    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:56.761279    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:56.761281    4493 round_trippers.go:580]     Content-Length: 263
	I0717 10:53:56.761299    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:56 GMT
	I0717 10:53:56.761304    4493 round_trippers.go:580]     Audit-Id: 20314caf-1202-44f4-8996-bf27e6cf6969
	I0717 10:53:56.761324    4493 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0717 10:53:56.761367    4493 api_server.go:141] control plane version: v1.30.2
	I0717 10:53:56.761377    4493 api_server.go:131] duration metric: took 3.008723839s to wait for apiserver health ...
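The healthz wait above is a simple poll loop: request https://192.169.0.15:8443/healthz until it returns 200, tolerating the 403 and 500 responses seen while RBAC bootstrap roles are still being created. A stand-alone Go sketch of such a probe (endpoint taken from the log; TLS verification is skipped because the probe runs anonymously, as the 403s above show):

-- illustrative Go sketch --
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.169.0.15:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
-- /illustrative Go sketch --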
	I0717 10:53:56.761383    4493 cni.go:84] Creating CNI manager for ""
	I0717 10:53:56.761387    4493 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0717 10:53:56.783921    4493 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 10:53:56.805067    4493 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 10:53:56.811013    4493 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0717 10:53:56.811027    4493 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0717 10:53:56.811033    4493 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0717 10:53:56.811038    4493 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 10:53:56.811044    4493 command_runner.go:130] > Access: 2024-07-17 17:53:18.129065165 +0000
	I0717 10:53:56.811050    4493 command_runner.go:130] > Modify: 2024-07-16 21:31:18.000000000 +0000
	I0717 10:53:56.811058    4493 command_runner.go:130] > Change: 2024-07-17 17:53:16.576065081 +0000
	I0717 10:53:56.811067    4493 command_runner.go:130] >  Birth: -
	I0717 10:53:56.811289    4493 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0717 10:53:56.811297    4493 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 10:53:56.831327    4493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 10:53:57.226431    4493 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0717 10:53:57.253309    4493 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0717 10:53:57.384856    4493 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0717 10:53:57.466701    4493 command_runner.go:130] > daemonset.apps/kindnet configured
	I0717 10:53:57.468049    4493 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 10:53:57.468110    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods
	I0717 10:53:57.468115    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.468121    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.468125    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.470857    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:53:57.470869    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.470878    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.470884    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.470890    4493 round_trippers.go:580]     Audit-Id: be11fdef-3178-4b6b-9b73-af4516117470
	I0717 10:53:57.470895    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.470899    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.470903    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.471831    4493 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"769"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"766","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 87605 chars]
	I0717 10:53:57.474696    4493 system_pods.go:59] 12 kube-system pods found
	I0717 10:53:57.474714    4493 system_pods.go:61] "coredns-7db6d8ff4d-nlwxm" [d9e6c103-3eba-4549-b327-23c87ce480cd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 10:53:57.474719    4493 system_pods.go:61] "etcd-multinode-875000" [b181608e-80a7-4ef3-9702-315fe76bc83b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 10:53:57.474724    4493 system_pods.go:61] "kindnet-fnltt" [31c26a51-23d0-4f20-a716-fbe77e2d1347] Running
	I0717 10:53:57.474728    4493 system_pods.go:61] "kindnet-hwkds" [41b256d2-0784-4ebc-82a6-1d435f44924e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0717 10:53:57.474731    4493 system_pods.go:61] "kindnet-pj9kh" [fd101f4e-0ee3-45fa-b5ed-0957fb0c87f5] Running
	I0717 10:53:57.474735    4493 system_pods.go:61] "kube-apiserver-multinode-875000" [994530a7-11e7-4b05-95ec-c77751a6c24d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 10:53:57.474741    4493 system_pods.go:61] "kube-controller-manager-multinode-875000" [10a5876c-ddf6-4f37-82ca-96ea7ebde028] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 10:53:57.474744    4493 system_pods.go:61] "kube-proxy-dnn4j" [fd7faf4d-f212-4c89-9ac5-8e408c295411] Running
	I0717 10:53:57.474747    4493 system_pods.go:61] "kube-proxy-tp2zz" [9fda8ef7-b324-4cbb-a8d9-98f93132b2e7] Running
	I0717 10:53:57.474750    4493 system_pods.go:61] "kube-proxy-zs8f8" [9e2bce56-d9e0-42a1-a265-4aab3577b031] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 10:53:57.474755    4493 system_pods.go:61] "kube-scheduler-multinode-875000" [b2f1c23d-635b-490e-a964-c28e1566ead0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 10:53:57.474759    4493 system_pods.go:61] "storage-provisioner" [2bf95484-4db9-4dc1-80b0-b4a35569c9af] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 10:53:57.474764    4493 system_pods.go:74] duration metric: took 6.708549ms to wait for pod list to return data ...
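The pod-list wait above is a plain GET of /api/v1/namespaces/kube-system/pods through a client built from the kubeconfig. Assuming k8s.io/client-go is available in go.mod, a similar listing could look like this (kubeconfig path taken from the log; not the test harness's own code):

-- illustrative Go sketch --
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19283-1099/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q phase=%s\n", p.Name, p.Status.Phase)
	}
}
-- /illustrative Go sketch --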
	I0717 10:53:57.474771    4493 node_conditions.go:102] verifying NodePressure condition ...
	I0717 10:53:57.474806    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes
	I0717 10:53:57.474811    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.474816    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.474819    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.477036    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:53:57.477050    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.477058    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.477074    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.477083    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.477086    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.477089    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.477092    4493 round_trippers.go:580]     Audit-Id: b542354e-dc6e-4cf5-bb27-2e1f02e5412a
	I0717 10:53:57.477286    4493 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"769"},"items":[{"metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14802 chars]
	I0717 10:53:57.477811    4493 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:53:57.477823    4493 node_conditions.go:123] node cpu capacity is 2
	I0717 10:53:57.477832    4493 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:53:57.477835    4493 node_conditions.go:123] node cpu capacity is 2
	I0717 10:53:57.477838    4493 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:53:57.477841    4493 node_conditions.go:123] node cpu capacity is 2
	I0717 10:53:57.477844    4493 node_conditions.go:105] duration metric: took 3.069599ms to run NodePressure ...
	I0717 10:53:57.477854    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 10:53:57.638782    4493 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0717 10:53:57.765972    4493 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0717 10:53:57.767197    4493 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 10:53:57.767251    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0717 10:53:57.767256    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.767262    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.767267    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.769104    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:53:57.769115    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.769122    4493 round_trippers.go:580]     Audit-Id: 7ceb3ea3-1596-4dc0-86c1-d925082ba2a2
	I0717 10:53:57.769127    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.769131    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.769135    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.769139    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.769144    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.769518    4493 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"772"},"items":[{"metadata":{"name":"etcd-multinode-875000","namespace":"kube-system","uid":"b181608e-80a7-4ef3-9702-315fe76bc83b","resourceVersion":"764","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.15:2379","kubernetes.io/config.hash":"be038e8a848fb24f787b8a2643981714","kubernetes.io/config.mirror":"be038e8a848fb24f787b8a2643981714","kubernetes.io/config.seen":"2024-07-17T17:49:49.643438469Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations"
:{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30912 chars]
	I0717 10:53:57.770217    4493 kubeadm.go:739] kubelet initialised
	I0717 10:53:57.770227    4493 kubeadm.go:740] duration metric: took 3.020268ms waiting for restarted kubelet to initialise ...
	I0717 10:53:57.770235    4493 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:53:57.770264    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods
	I0717 10:53:57.770269    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.770274    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.770278    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.772341    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:53:57.772349    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.772355    4493 round_trippers.go:580]     Audit-Id: 7427f09f-9885-4387-9ebb-cc9207414853
	I0717 10:53:57.772358    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.772360    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.772363    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.772365    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.772367    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.773154    4493 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"772"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"766","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 87605 chars]
	I0717 10:53:57.775015    4493 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-nlwxm" in "kube-system" namespace to be "Ready" ...
	I0717 10:53:57.775053    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nlwxm
	I0717 10:53:57.775058    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.775064    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.775068    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.776272    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:53:57.776282    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.776289    4493 round_trippers.go:580]     Audit-Id: 5297d168-8648-4e01-8cd2-0657b77b7bc7
	I0717 10:53:57.776292    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.776295    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.776297    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.776300    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.776302    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.776563    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"766","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0717 10:53:57.776815    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:53:57.776823    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.776829    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.776834    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.782913    4493 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 10:53:57.782925    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.782931    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.782935    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.782938    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.782940    4493 round_trippers.go:580]     Audit-Id: c976c521-8bea-4439-98cc-ba7021ededb8
	I0717 10:53:57.782943    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.782945    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.783144    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:53:57.783354    4493 pod_ready.go:97] node "multinode-875000" hosting pod "coredns-7db6d8ff4d-nlwxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
	I0717 10:53:57.783365    4493 pod_ready.go:81] duration metric: took 8.340564ms for pod "coredns-7db6d8ff4d-nlwxm" in "kube-system" namespace to be "Ready" ...
	E0717 10:53:57.783372    4493 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-875000" hosting pod "coredns-7db6d8ff4d-nlwxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
	I0717 10:53:57.783377    4493 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:53:57.783411    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-875000
	I0717 10:53:57.783416    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.783422    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.783426    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.784965    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:53:57.784973    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.784978    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.784982    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.784985    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.784988    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.784991    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.784993    4493 round_trippers.go:580]     Audit-Id: 21e31e82-6db8-463b-bb85-60fb550aefb1
	I0717 10:53:57.785293    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-875000","namespace":"kube-system","uid":"b181608e-80a7-4ef3-9702-315fe76bc83b","resourceVersion":"764","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.15:2379","kubernetes.io/config.hash":"be038e8a848fb24f787b8a2643981714","kubernetes.io/config.mirror":"be038e8a848fb24f787b8a2643981714","kubernetes.io/config.seen":"2024-07-17T17:49:49.643438469Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0717 10:53:57.785542    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:53:57.785549    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.785554    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.785558    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.786881    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:53:57.786893    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.786898    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.786901    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.786904    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.786907    4493 round_trippers.go:580]     Audit-Id: 639c3cdc-0ed9-4159-a3dc-e0a5147be43b
	I0717 10:53:57.786920    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.786926    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.787108    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:53:57.787281    4493 pod_ready.go:97] node "multinode-875000" hosting pod "etcd-multinode-875000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
	I0717 10:53:57.787291    4493 pod_ready.go:81] duration metric: took 3.908842ms for pod "etcd-multinode-875000" in "kube-system" namespace to be "Ready" ...
	E0717 10:53:57.787297    4493 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-875000" hosting pod "etcd-multinode-875000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
	I0717 10:53:57.787308    4493 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:53:57.787337    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-875000
	I0717 10:53:57.787342    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.787347    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.787355    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.788622    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:53:57.788632    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.788637    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.788640    4493 round_trippers.go:580]     Audit-Id: b08b5c94-9e60-4160-b6a2-9a509b39286e
	I0717 10:53:57.788643    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.788646    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.788649    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.788651    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.788821    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-875000","namespace":"kube-system","uid":"994530a7-11e7-4b05-95ec-c77751a6c24d","resourceVersion":"763","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.15:8443","kubernetes.io/config.hash":"61312bb99a3953bb6d2fa540cea05ce5","kubernetes.io/config.mirror":"61312bb99a3953bb6d2fa540cea05ce5","kubernetes.io/config.seen":"2024-07-17T17:49:49.643441506Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8135 chars]
	I0717 10:53:57.789068    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:53:57.789075    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.789081    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.789086    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.790337    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:53:57.790344    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.790348    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.790351    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.790354    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.790356    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.790359    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.790363    4493 round_trippers.go:580]     Audit-Id: aea73fd6-d62c-4d66-aaf5-b2b8486da07e
	I0717 10:53:57.790523    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:53:57.790695    4493 pod_ready.go:97] node "multinode-875000" hosting pod "kube-apiserver-multinode-875000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
	I0717 10:53:57.790704    4493 pod_ready.go:81] duration metric: took 3.391173ms for pod "kube-apiserver-multinode-875000" in "kube-system" namespace to be "Ready" ...
	E0717 10:53:57.790711    4493 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-875000" hosting pod "kube-apiserver-multinode-875000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
	I0717 10:53:57.790716    4493 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:53:57.790748    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-875000
	I0717 10:53:57.790753    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.790758    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.790762    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.792799    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:53:57.792807    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.792813    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.792817    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.792820    4493 round_trippers.go:580]     Audit-Id: d6dad934-a3f9-4e3d-8e87-bacbb94674b4
	I0717 10:53:57.792823    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.792831    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.792835    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.793248    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-875000","namespace":"kube-system","uid":"10a5876c-ddf6-4f37-82ca-96ea7ebde028","resourceVersion":"762","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2fd022622c52536f8b0b923ecacc7ea2","kubernetes.io/config.mirror":"2fd022622c52536f8b0b923ecacc7ea2","kubernetes.io/config.seen":"2024-07-17T17:49:49.643442180Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7726 chars]
	I0717 10:53:57.868227    4493 request.go:629] Waited for 74.688427ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:53:57.868258    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:53:57.868263    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.868269    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.868274    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.875551    4493 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 10:53:57.875563    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.875568    4493 round_trippers.go:580]     Audit-Id: 6a7ba713-3120-442c-a7e6-c118812e297c
	I0717 10:53:57.875571    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.875573    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.875576    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.875578    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.875581    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.875660    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:53:57.875854    4493 pod_ready.go:97] node "multinode-875000" hosting pod "kube-controller-manager-multinode-875000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
	I0717 10:53:57.875865    4493 pod_ready.go:81] duration metric: took 85.142117ms for pod "kube-controller-manager-multinode-875000" in "kube-system" namespace to be "Ready" ...
	E0717 10:53:57.875893    4493 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-875000" hosting pod "kube-controller-manager-multinode-875000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
	I0717 10:53:57.875903    4493 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dnn4j" in "kube-system" namespace to be "Ready" ...
	I0717 10:53:58.069150    4493 request.go:629] Waited for 193.177485ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dnn4j
	I0717 10:53:58.069278    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dnn4j
	I0717 10:53:58.069287    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:58.069298    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:58.069306    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:58.072012    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:53:58.072026    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:58.072033    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:58 GMT
	I0717 10:53:58.072037    4493 round_trippers.go:580]     Audit-Id: 4536672e-50b0-4b49-9609-ab98d69dfd87
	I0717 10:53:58.072040    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:58.072044    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:58.072048    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:58.072052    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:58.072185    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dnn4j","generateName":"kube-proxy-","namespace":"kube-system","uid":"fd7faf4d-f212-4c89-9ac5-8e408c295411","resourceVersion":"714","creationTimestamp":"2024-07-17T17:51:33Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8a69f9ff-027d-455b-b65a-6c9aef9936e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:51:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a69f9ff-027d-455b-b65a-6c9aef9936e7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0717 10:53:58.269376    4493 request.go:629] Waited for 196.847669ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m03
	I0717 10:53:58.269432    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m03
	I0717 10:53:58.269440    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:58.269452    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:58.269459    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:58.271840    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:53:58.271855    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:58.271862    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:58.271866    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:58.271870    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:58.271873    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:58 GMT
	I0717 10:53:58.271877    4493 round_trippers.go:580]     Audit-Id: 0196d5bf-6ac4-4fea-9bd3-70df2ee429f2
	I0717 10:53:58.271880    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:58.271984    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m03","uid":"4dcfd269-94b0-4652-bd6d-b7d938fc2b6d","resourceVersion":"741","creationTimestamp":"2024-07-17T17:52:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_52_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:52:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3641 chars]
	I0717 10:53:58.272205    4493 pod_ready.go:92] pod "kube-proxy-dnn4j" in "kube-system" namespace has status "Ready":"True"
	I0717 10:53:58.272216    4493 pod_ready.go:81] duration metric: took 396.292994ms for pod "kube-proxy-dnn4j" in "kube-system" namespace to be "Ready" ...
	I0717 10:53:58.272224    4493 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tp2zz" in "kube-system" namespace to be "Ready" ...
	I0717 10:53:58.468795    4493 request.go:629] Waited for 196.519779ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp2zz
	I0717 10:53:58.468870    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp2zz
	I0717 10:53:58.468880    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:58.468891    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:58.468899    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:58.472107    4493 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:53:58.472119    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:58.472125    4493 round_trippers.go:580]     Audit-Id: 39bff14c-6da0-496c-9dec-9d82e6504e3e
	I0717 10:53:58.472129    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:58.472132    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:58.472135    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:58.472152    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:58.472157    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:58 GMT
	I0717 10:53:58.472249    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tp2zz","generateName":"kube-proxy-","namespace":"kube-system","uid":"9fda8ef7-b324-4cbb-a8d9-98f93132b2e7","resourceVersion":"486","creationTimestamp":"2024-07-17T17:50:42Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8a69f9ff-027d-455b-b65a-6c9aef9936e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a69f9ff-027d-455b-b65a-6c9aef9936e7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0717 10:53:58.669099    4493 request.go:629] Waited for 196.505789ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:53:58.669150    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:53:58.669180    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:58.669192    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:58.669200    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:58.671782    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:53:58.671797    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:58.671805    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:58 GMT
	I0717 10:53:58.671810    4493 round_trippers.go:580]     Audit-Id: 11493781-0c12-4492-b335-044080d1446d
	I0717 10:53:58.671813    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:58.671816    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:58.671819    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:58.671823    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:58.671994    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"e92886e5-127c-42d8-b0f7-76db7895a433","resourceVersion":"553","creationTimestamp":"2024-07-17T17:50:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_50_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3824 chars]
	I0717 10:53:58.672221    4493 pod_ready.go:92] pod "kube-proxy-tp2zz" in "kube-system" namespace has status "Ready":"True"
	I0717 10:53:58.672233    4493 pod_ready.go:81] duration metric: took 399.991658ms for pod "kube-proxy-tp2zz" in "kube-system" namespace to be "Ready" ...
	I0717 10:53:58.672242    4493 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zs8f8" in "kube-system" namespace to be "Ready" ...
	I0717 10:53:58.870210    4493 request.go:629] Waited for 197.913414ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zs8f8
	I0717 10:53:58.870347    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zs8f8
	I0717 10:53:58.870359    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:58.870370    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:58.870376    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:58.872970    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:53:58.872987    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:58.872995    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:58.872999    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:58.873003    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:58 GMT
	I0717 10:53:58.873007    4493 round_trippers.go:580]     Audit-Id: ca1db5d7-ea4c-4c14-993f-721dc53ac6a0
	I0717 10:53:58.873010    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:58.873013    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:58.873094    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zs8f8","generateName":"kube-proxy-","namespace":"kube-system","uid":"9e2bce56-d9e0-42a1-a265-4aab3577b031","resourceVersion":"774","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8a69f9ff-027d-455b-b65a-6c9aef9936e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a69f9ff-027d-455b-b65a-6c9aef9936e7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0717 10:53:59.069076    4493 request.go:629] Waited for 195.635449ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:53:59.069169    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:53:59.069176    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:59.069184    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:59.069190    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:59.071286    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:53:59.071307    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:59.071313    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:59 GMT
	I0717 10:53:59.071318    4493 round_trippers.go:580]     Audit-Id: e9daa319-d3fc-4813-b7af-f68df3e30559
	I0717 10:53:59.071346    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:59.071353    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:59.071356    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:59.071359    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:59.071435    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:53:59.071638    4493 pod_ready.go:97] node "multinode-875000" hosting pod "kube-proxy-zs8f8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
	I0717 10:53:59.071652    4493 pod_ready.go:81] duration metric: took 399.394541ms for pod "kube-proxy-zs8f8" in "kube-system" namespace to be "Ready" ...
	E0717 10:53:59.071660    4493 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-875000" hosting pod "kube-proxy-zs8f8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
	I0717 10:53:59.071665    4493 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:53:59.269426    4493 request.go:629] Waited for 197.709805ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-875000
	I0717 10:53:59.269482    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-875000
	I0717 10:53:59.269491    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:59.269518    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:59.269586    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:59.273267    4493 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:53:59.273281    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:59.273286    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:59.273290    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:59.273294    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:59.273297    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:59 GMT
	I0717 10:53:59.273301    4493 round_trippers.go:580]     Audit-Id: b4328630-f3e1-42b8-8900-f4dbf39dfdce
	I0717 10:53:59.273304    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:59.273390    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-875000","namespace":"kube-system","uid":"b2f1c23d-635b-490e-a964-c28e1566ead0","resourceVersion":"761","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e0081164f6599d019e7f075e4c8b7277","kubernetes.io/config.mirror":"e0081164f6599d019e7f075e4c8b7277","kubernetes.io/config.seen":"2024-07-17T17:49:49.643442746Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5438 chars]
	I0717 10:53:59.469363    4493 request.go:629] Waited for 195.721843ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:53:59.469531    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:53:59.469550    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:59.469565    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:59.469572    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:59.472420    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:53:59.472436    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:59.472443    4493 round_trippers.go:580]     Audit-Id: ee40a7d2-de32-4530-8134-79ca8c2b1e97
	I0717 10:53:59.472448    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:59.472452    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:59.472456    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:59.472459    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:59.472463    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:59 GMT
	I0717 10:53:59.472548    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:53:59.472792    4493 pod_ready.go:97] node "multinode-875000" hosting pod "kube-scheduler-multinode-875000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
	I0717 10:53:59.472806    4493 pod_ready.go:81] duration metric: took 401.124044ms for pod "kube-scheduler-multinode-875000" in "kube-system" namespace to be "Ready" ...
	E0717 10:53:59.472814    4493 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-875000" hosting pod "kube-scheduler-multinode-875000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
	I0717 10:53:59.472820    4493 pod_ready.go:38] duration metric: took 1.702533365s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:53:59.472837    4493 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 10:53:59.484509    4493 command_runner.go:130] > -16
	I0717 10:53:59.484566    4493 ops.go:34] apiserver oom_adj: -16
	I0717 10:53:59.484574    4493 kubeadm.go:597] duration metric: took 8.726108957s to restartPrimaryControlPlane
	I0717 10:53:59.484580    4493 kubeadm.go:394] duration metric: took 8.746781959s to StartCluster
	I0717 10:53:59.484590    4493 settings.go:142] acquiring lock: {Name:mkc45f011a907c66e2dbca7dadfff37ab48f7d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:53:59.484676    4493 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:53:59.485028    4493 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:53:59.485929    4493 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.15 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:53:59.485962    4493 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 10:53:59.486078    4493 config.go:182] Loaded profile config "multinode-875000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:53:59.527264    4493 out.go:177] * Verifying Kubernetes components...
	I0717 10:53:59.570382    4493 out.go:177] * Enabled addons: 
	I0717 10:53:59.591289    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:53:59.612289    4493 addons.go:510] duration metric: took 126.33187ms for enable addons: enabled=[]
	I0717 10:53:59.740151    4493 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:53:59.753734    4493 node_ready.go:35] waiting up to 6m0s for node "multinode-875000" to be "Ready" ...
	I0717 10:53:59.753791    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:53:59.753796    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:59.753802    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:59.753806    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:59.755382    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:53:59.755391    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:59.755397    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:59.755406    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:59.755409    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:59.755411    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:59.755415    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:59 GMT
	I0717 10:53:59.755417    4493 round_trippers.go:580]     Audit-Id: 46b5f3ac-1390-41c3-9d17-19db56ef8579
	I0717 10:53:59.755583    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:00.255453    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:00.255476    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:00.255488    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:00.255496    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:00.257693    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:00.257723    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:00.257769    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:00.257783    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:00.257790    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:00.257799    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:00.257806    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:00 GMT
	I0717 10:54:00.257810    4493 round_trippers.go:580]     Audit-Id: 25ece10c-b1e8-46f7-acb4-98a0ba7f80c4
	I0717 10:54:00.258014    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:00.754085    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:00.754107    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:00.754120    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:00.754125    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:00.756392    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:00.756405    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:00.756413    4493 round_trippers.go:580]     Audit-Id: f3b9f318-4fff-4032-9861-017f3ba37862
	I0717 10:54:00.756417    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:00.756420    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:00.756423    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:00.756426    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:00.756433    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:00 GMT
	I0717 10:54:00.756610    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:01.254735    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:01.254760    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:01.254771    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:01.254779    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:01.257145    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:01.257159    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:01.257166    4493 round_trippers.go:580]     Audit-Id: e251b3a7-69fb-4223-9473-88d54919cd71
	I0717 10:54:01.257171    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:01.257176    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:01.257181    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:01.257185    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:01.257190    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:01 GMT
	I0717 10:54:01.257449    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:01.754015    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:01.754036    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:01.754048    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:01.754054    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:01.756245    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:01.756275    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:01.756293    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:01 GMT
	I0717 10:54:01.756302    4493 round_trippers.go:580]     Audit-Id: 89741957-bf55-43f0-9f9e-46c8b05fa7ae
	I0717 10:54:01.756310    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:01.756315    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:01.756321    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:01.756339    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:01.756521    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:01.756757    4493 node_ready.go:53] node "multinode-875000" has status "Ready":"False"
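	node_ready.go is re-fetching the Node object roughly every 500ms, for up to 6m0s, until its NodeReady condition reports True; each request/response block above is one poll iteration, and this line records a still-NotReady check. A minimal, assumed sketch of that kind of wait with client-go (illustrative only, not minikube's actual implementation; the kubeconfig path and node name are taken from this run):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeIsReady reports whether the NodeReady condition is True.
	func nodeIsReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Load the default kubeconfig, as kubectl would.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		nodeName := "multinode-875000"
		// Poll every 500ms, up to 6 minutes, matching the cadence seen in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat errors as "not yet ready" and keep polling
				}
				return nodeIsReady(node), nil
			})
		if err != nil {
			panic(fmt.Errorf("node %q never became Ready: %w", nodeName, err))
		}
		fmt.Printf("node %q is Ready\n", nodeName)
	}
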
	I0717 10:54:02.254059    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:02.254143    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:02.254165    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:02.254173    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:02.257395    4493 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:54:02.257410    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:02.257417    4493 round_trippers.go:580]     Audit-Id: 9a205d18-8abf-468d-818c-232155c31735
	I0717 10:54:02.257433    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:02.257439    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:02.257442    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:02.257446    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:02.257451    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:02 GMT
	I0717 10:54:02.257753    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:02.754830    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:02.754843    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:02.754850    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:02.754854    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:02.756697    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:02.756715    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:02.756723    4493 round_trippers.go:580]     Audit-Id: 116c98cd-f772-4d28-a72e-2ab93e007f94
	I0717 10:54:02.756726    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:02.756729    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:02.756738    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:02.756743    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:02.756747    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:02 GMT
	I0717 10:54:02.756874    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:03.255286    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:03.255306    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:03.255319    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:03.255326    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:03.257932    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:03.257946    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:03.257953    4493 round_trippers.go:580]     Audit-Id: e4a16496-ff7d-4de5-ad6c-fb858787cf4e
	I0717 10:54:03.257957    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:03.257961    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:03.257966    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:03.257970    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:03.257975    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:03 GMT
	I0717 10:54:03.258077    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:03.754131    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:03.754186    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:03.754289    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:03.754305    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:03.756754    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:03.756775    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:03.756782    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:03 GMT
	I0717 10:54:03.756787    4493 round_trippers.go:580]     Audit-Id: 37090f35-74c4-4514-aaad-3d4684c670ad
	I0717 10:54:03.756803    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:03.756808    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:03.756813    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:03.756817    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:03.756889    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:03.757153    4493 node_ready.go:53] node "multinode-875000" has status "Ready":"False"
	I0717 10:54:04.254859    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:04.254882    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:04.254975    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:04.254984    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:04.257542    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:04.257556    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:04.257564    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:04 GMT
	I0717 10:54:04.257569    4493 round_trippers.go:580]     Audit-Id: e57b6670-108b-4e43-9146-26c87210969f
	I0717 10:54:04.257573    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:04.257577    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:04.257580    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:04.257583    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:04.257777    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:04.755386    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:04.755410    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:04.755459    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:04.755468    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:04.757807    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:04.757823    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:04.757834    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:04.757840    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:04.757844    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:04.757848    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:04 GMT
	I0717 10:54:04.757852    4493 round_trippers.go:580]     Audit-Id: 10215338-e734-4148-a946-3f9c852e0f8f
	I0717 10:54:04.757855    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:04.757974    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:05.254230    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:05.254259    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:05.254272    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:05.254278    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:05.257225    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:05.257241    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:05.257248    4493 round_trippers.go:580]     Audit-Id: ea906fc0-0e2a-4e41-acec-5fa673dcc27b
	I0717 10:54:05.257254    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:05.257258    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:05.257262    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:05.257266    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:05.257270    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:05 GMT
	I0717 10:54:05.257382    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:05.755232    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:05.755255    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:05.755266    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:05.755274    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:05.757968    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:05.757983    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:05.757990    4493 round_trippers.go:580]     Audit-Id: 4aac2d8d-5058-4a61-88ec-cc6a2ff69089
	I0717 10:54:05.757995    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:05.757999    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:05.758004    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:05.758007    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:05.758011    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:05 GMT
	I0717 10:54:05.758158    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:05.758417    4493 node_ready.go:53] node "multinode-875000" has status "Ready":"False"
	I0717 10:54:06.254111    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:06.254143    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:06.254197    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:06.254205    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:06.256826    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:06.256841    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:06.256849    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:06.256853    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:06.256856    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:06.256859    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:06 GMT
	I0717 10:54:06.256862    4493 round_trippers.go:580]     Audit-Id: 8072bf4f-9ef7-4723-ae40-96049993c191
	I0717 10:54:06.256866    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:06.256977    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:06.755692    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:06.755717    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:06.755728    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:06.755735    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:06.762687    4493 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 10:54:06.762704    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:06.762714    4493 round_trippers.go:580]     Audit-Id: efe793e6-64e8-4a1a-a1e7-f8a6763d1215
	I0717 10:54:06.762720    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:06.762725    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:06.762733    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:06.762739    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:06.762744    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:06 GMT
	I0717 10:54:06.763501    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:07.254211    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:07.254228    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:07.254236    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:07.254240    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:07.256316    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:07.256324    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:07.256330    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:07.256333    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:07.256335    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:07.256338    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:07 GMT
	I0717 10:54:07.256340    4493 round_trippers.go:580]     Audit-Id: 4f8184c2-cbb4-49e1-a13b-697efb477d7f
	I0717 10:54:07.256343    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:07.256595    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:07.754181    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:07.754197    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:07.754205    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:07.754211    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:07.756334    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:07.756343    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:07.756347    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:07.756351    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:07.756354    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:07.756358    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:07 GMT
	I0717 10:54:07.756363    4493 round_trippers.go:580]     Audit-Id: 0619515c-b586-4f1f-9e0c-08fb4d659c1f
	I0717 10:54:07.756366    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:07.756560    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"863","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5421 chars]
	I0717 10:54:08.254303    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:08.254319    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:08.254328    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:08.254333    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:08.256382    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:08.256391    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:08.256397    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:08 GMT
	I0717 10:54:08.256400    4493 round_trippers.go:580]     Audit-Id: e8abd655-64a1-49d4-8642-25ef654dc343
	I0717 10:54:08.256403    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:08.256412    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:08.256416    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:08.256421    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:08.256504    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"866","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0717 10:54:08.256688    4493 node_ready.go:53] node "multinode-875000" has status "Ready":"False"
	I0717 10:54:08.754929    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:08.754949    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:08.754961    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:08.754967    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:08.758213    4493 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:54:08.758235    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:08.758249    4493 round_trippers.go:580]     Audit-Id: 7972f044-cdf4-49a3-8a3d-625257bc3f8a
	I0717 10:54:08.758254    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:08.758258    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:08.758262    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:08.758301    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:08.758309    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:08 GMT
	I0717 10:54:08.758635    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"866","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0717 10:54:09.255617    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:09.255644    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:09.255713    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:09.255725    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:09.258267    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:09.258284    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:09.258293    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:09.258312    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:09 GMT
	I0717 10:54:09.258318    4493 round_trippers.go:580]     Audit-Id: 5f3509ed-2ccb-410e-8369-07df94c46387
	I0717 10:54:09.258322    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:09.258325    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:09.258348    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:09.258942    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"866","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0717 10:54:09.754128    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:09.754139    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:09.754145    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:09.754148    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:09.755581    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:09.755591    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:09.755595    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:09.755613    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:09.755628    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:09 GMT
	I0717 10:54:09.755638    4493 round_trippers.go:580]     Audit-Id: 25fbb815-9cb9-4e6f-b484-358d12aa1b97
	I0717 10:54:09.755647    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:09.755652    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:09.755740    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"866","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0717 10:54:10.254758    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:10.254791    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:10.254803    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:10.254811    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:10.257339    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:10.257356    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:10.257364    4493 round_trippers.go:580]     Audit-Id: 63e8fdd7-2ee9-4d35-bf4d-e13f2a8e7298
	I0717 10:54:10.257369    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:10.257372    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:10.257376    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:10.257380    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:10.257383    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:10 GMT
	I0717 10:54:10.257497    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"874","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5293 chars]
	I0717 10:54:10.257747    4493 node_ready.go:49] node "multinode-875000" has status "Ready":"True"
	I0717 10:54:10.257763    4493 node_ready.go:38] duration metric: took 10.503727197s for node "multinode-875000" to be "Ready" ...
	I0717 10:54:10.257771    4493 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:54:10.257813    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods
	I0717 10:54:10.257819    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:10.257826    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:10.257832    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:10.260186    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:10.260197    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:10.260202    4493 round_trippers.go:580]     Audit-Id: 07867cca-d61d-41af-a776-f046997c3879
	I0717 10:54:10.260207    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:10.260211    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:10.260214    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:10.260218    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:10.260223    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:10 GMT
	I0717 10:54:10.261217    4493 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"874"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"766","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 86544 chars]
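	Once the node is Ready, pod_ready.go lists every pod in kube-system (the PodList response above) and then waits on each system-critical pod's PodReady condition. A small, assumed client-go sketch of that list-and-check step (illustrative only, not the test's actual code):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady mirrors the check implied by pod_ready.go: the PodReady
	// condition on the pod must be True.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// List everything in kube-system, as the log does, then report
		// which pods are not yet Ready.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%-40s ready=%v\n", p.Name, podIsReady(&p))
		}
	}
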
	I0717 10:54:10.263021    4493 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nlwxm" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:10.263058    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nlwxm
	I0717 10:54:10.263062    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:10.263068    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:10.263072    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:10.264134    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:10.264141    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:10.264145    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:10.264149    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:10 GMT
	I0717 10:54:10.264152    4493 round_trippers.go:580]     Audit-Id: 7ac35786-0221-4baa-a577-4b3196cea35f
	I0717 10:54:10.264155    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:10.264157    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:10.264166    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:10.264311    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"766","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0717 10:54:10.264544    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:10.264551    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:10.264556    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:10.264559    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:10.265861    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:10.265869    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:10.265877    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:10.265881    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:10.265885    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:10 GMT
	I0717 10:54:10.265888    4493 round_trippers.go:580]     Audit-Id: 82dd2eae-9d22-44f3-aeaa-831bc057e4b6
	I0717 10:54:10.265891    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:10.265895    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:10.266054    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"874","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5293 chars]
	I0717 10:54:10.763930    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nlwxm
	I0717 10:54:10.763951    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:10.763963    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:10.763969    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:10.766465    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:10.766478    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:10.766504    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:10.766517    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:10.766524    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:10 GMT
	I0717 10:54:10.766532    4493 round_trippers.go:580]     Audit-Id: 639f1710-1c3b-454c-a996-ffd9332bba25
	I0717 10:54:10.766538    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:10.766544    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:10.766747    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"766","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0717 10:54:10.767149    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:10.767159    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:10.767167    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:10.767172    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:10.768554    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:10.768562    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:10.768566    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:10.768571    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:10.768576    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:10 GMT
	I0717 10:54:10.768580    4493 round_trippers.go:580]     Audit-Id: 71e424b7-052c-4661-8537-426df69d70bd
	I0717 10:54:10.768585    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:10.768588    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:10.768726    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"874","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5293 chars]
	I0717 10:54:11.264901    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nlwxm
	I0717 10:54:11.264931    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:11.264945    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:11.264951    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:11.267695    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:11.267711    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:11.267718    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:11.267722    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:11.267727    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:11.267730    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:11.267734    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:11 GMT
	I0717 10:54:11.267737    4493 round_trippers.go:580]     Audit-Id: f4529cd1-ed9e-424c-a40c-ef0c63483fc1
	I0717 10:54:11.267816    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"766","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0717 10:54:11.268182    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:11.268191    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:11.268200    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:11.268204    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:11.269426    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:11.269437    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:11.269444    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:11.269457    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:11.269463    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:11.269467    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:11 GMT
	I0717 10:54:11.269469    4493 round_trippers.go:580]     Audit-Id: 6d01dc6f-b546-4d5f-98ad-8758a2bd0883
	I0717 10:54:11.269472    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:11.269588    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"874","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5293 chars]
	I0717 10:54:11.763239    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nlwxm
	I0717 10:54:11.763259    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:11.763267    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:11.763273    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:11.781636    4493 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0717 10:54:11.781648    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:11.781653    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:11.781657    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:11.781659    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:11.781661    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:11 GMT
	I0717 10:54:11.781665    4493 round_trippers.go:580]     Audit-Id: 0cc8d777-8a0f-4cc8-aec4-50dd131b8dcc
	I0717 10:54:11.781667    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:11.781821    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"766","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0717 10:54:11.782102    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:11.782109    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:11.782115    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:11.782118    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:11.783465    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:11.783476    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:11.783481    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:11 GMT
	I0717 10:54:11.783485    4493 round_trippers.go:580]     Audit-Id: 11b00e20-fa3e-4583-873b-fb18bc000c5f
	I0717 10:54:11.783489    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:11.783491    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:11.783495    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:11.783504    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:11.783737    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"874","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5293 chars]
	I0717 10:54:12.263974    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nlwxm
	I0717 10:54:12.263997    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:12.264008    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:12.264014    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:12.266338    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:12.266349    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:12.266356    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:12.266360    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:12.266365    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:12 GMT
	I0717 10:54:12.266369    4493 round_trippers.go:580]     Audit-Id: e87ccaf0-d00c-41fd-8c96-4af67af66ae5
	I0717 10:54:12.266374    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:12.266377    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:12.266664    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"766","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0717 10:54:12.266936    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:12.266943    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:12.266949    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:12.266953    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:12.268163    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:12.268172    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:12.268177    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:12.268180    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:12.268184    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:12.268186    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:12 GMT
	I0717 10:54:12.268190    4493 round_trippers.go:580]     Audit-Id: af4810ca-5ed3-4340-86e5-a55da617acac
	I0717 10:54:12.268193    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:12.268265    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"874","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5293 chars]
	I0717 10:54:12.268439    4493 pod_ready.go:102] pod "coredns-7db6d8ff4d-nlwxm" in "kube-system" namespace has status "Ready":"False"
	I0717 10:54:12.763313    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nlwxm
	I0717 10:54:12.763326    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:12.763332    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:12.763337    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:12.765317    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:12.765328    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:12.765333    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:12.765340    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:12 GMT
	I0717 10:54:12.765344    4493 round_trippers.go:580]     Audit-Id: 64ee0eff-0ef6-4b21-a7d2-f58f3cde573b
	I0717 10:54:12.765347    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:12.765352    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:12.765355    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:12.765527    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"766","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0717 10:54:12.765805    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:12.765812    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:12.765818    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:12.765822    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:12.774292    4493 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 10:54:12.774304    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:12.774309    4493 round_trippers.go:580]     Audit-Id: 2078b5f7-f16e-4a6b-b755-fe07f89a7880
	I0717 10:54:12.774313    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:12.774315    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:12.774332    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:12.774339    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:12.774341    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:12 GMT
	I0717 10:54:12.774457    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:54:13.263384    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nlwxm
	I0717 10:54:13.263405    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.263417    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.263423    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.265900    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:13.265912    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.265919    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.265946    4493 round_trippers.go:580]     Audit-Id: dea831ee-8530-481a-80eb-5da3319467b4
	I0717 10:54:13.265956    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.265961    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.265965    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.265975    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.266129    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"895","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6783 chars]
	I0717 10:54:13.266490    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:13.266497    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.266503    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.266506    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.267669    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:13.267677    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.267682    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.267684    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.267687    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.267691    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.267694    4493 round_trippers.go:580]     Audit-Id: bbfe4b42-85c9-439c-8d92-08e6f6a64ee5
	I0717 10:54:13.267697    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.267986    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:54:13.268154    4493 pod_ready.go:92] pod "coredns-7db6d8ff4d-nlwxm" in "kube-system" namespace has status "Ready":"True"
	I0717 10:54:13.268163    4493 pod_ready.go:81] duration metric: took 3.005051569s for pod "coredns-7db6d8ff4d-nlwxm" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.268172    4493 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.268203    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-875000
	I0717 10:54:13.268207    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.268213    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.268217    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.269265    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:13.269274    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.269279    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.269283    4493 round_trippers.go:580]     Audit-Id: 73a01a19-0d1f-449f-b94e-c4171f6e316f
	I0717 10:54:13.269285    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.269288    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.269292    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.269295    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.269400    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-875000","namespace":"kube-system","uid":"b181608e-80a7-4ef3-9702-315fe76bc83b","resourceVersion":"868","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.15:2379","kubernetes.io/config.hash":"be038e8a848fb24f787b8a2643981714","kubernetes.io/config.mirror":"be038e8a848fb24f787b8a2643981714","kubernetes.io/config.seen":"2024-07-17T17:49:49.643438469Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6357 chars]
	I0717 10:54:13.269623    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:13.269630    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.269636    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.269639    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.270650    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:13.270657    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.270663    4493 round_trippers.go:580]     Audit-Id: 4c5f293c-35dd-434f-b207-81722a7d3607
	I0717 10:54:13.270666    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.270669    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.270671    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.270674    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.270678    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.270867    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:54:13.271029    4493 pod_ready.go:92] pod "etcd-multinode-875000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:54:13.271037    4493 pod_ready.go:81] duration metric: took 2.859825ms for pod "etcd-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.271048    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.271074    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-875000
	I0717 10:54:13.271078    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.271083    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.271086    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.272092    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:54:13.272099    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.272103    4493 round_trippers.go:580]     Audit-Id: 46275850-dae1-44fa-bb5a-d0ae062b1988
	I0717 10:54:13.272107    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.272110    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.272120    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.272123    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.272125    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.272245    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-875000","namespace":"kube-system","uid":"994530a7-11e7-4b05-95ec-c77751a6c24d","resourceVersion":"872","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.15:8443","kubernetes.io/config.hash":"61312bb99a3953bb6d2fa540cea05ce5","kubernetes.io/config.mirror":"61312bb99a3953bb6d2fa540cea05ce5","kubernetes.io/config.seen":"2024-07-17T17:49:49.643441506Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0717 10:54:13.272462    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:13.272469    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.272475    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.272479    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.273341    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:54:13.273350    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.273359    4493 round_trippers.go:580]     Audit-Id: 3988e007-6201-47dc-b623-0b3930a1efd3
	I0717 10:54:13.273364    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.273369    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.273372    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.273377    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.273386    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.273509    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:54:13.273672    4493 pod_ready.go:92] pod "kube-apiserver-multinode-875000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:54:13.273680    4493 pod_ready.go:81] duration metric: took 2.627644ms for pod "kube-apiserver-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.273686    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.273713    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-875000
	I0717 10:54:13.273718    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.273723    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.273727    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.274692    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:54:13.274702    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.274709    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.274713    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.274715    4493 round_trippers.go:580]     Audit-Id: 1c515b9e-1497-4f78-b89a-367c2ae6ba35
	I0717 10:54:13.274733    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.274737    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.274741    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.274841    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-875000","namespace":"kube-system","uid":"10a5876c-ddf6-4f37-82ca-96ea7ebde028","resourceVersion":"875","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2fd022622c52536f8b0b923ecacc7ea2","kubernetes.io/config.mirror":"2fd022622c52536f8b0b923ecacc7ea2","kubernetes.io/config.seen":"2024-07-17T17:49:49.643442180Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7464 chars]
	I0717 10:54:13.275068    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:13.275075    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.275081    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.275084    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.275968    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:54:13.275974    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.275978    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.275982    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.275984    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.275988    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.275991    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.275994    4493 round_trippers.go:580]     Audit-Id: f03b2211-9d94-4ef0-ba6b-bcb945afcb10
	I0717 10:54:13.276086    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:54:13.276244    4493 pod_ready.go:92] pod "kube-controller-manager-multinode-875000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:54:13.276251    4493 pod_ready.go:81] duration metric: took 2.559695ms for pod "kube-controller-manager-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.276258    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dnn4j" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.276284    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dnn4j
	I0717 10:54:13.276289    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.276295    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.276298    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.277072    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:54:13.277078    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.277082    4493 round_trippers.go:580]     Audit-Id: 95cf34fc-1e1b-4a95-be0f-8ea41b1d3af3
	I0717 10:54:13.277086    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.277089    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.277095    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.277098    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.277101    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.277277    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dnn4j","generateName":"kube-proxy-","namespace":"kube-system","uid":"fd7faf4d-f212-4c89-9ac5-8e408c295411","resourceVersion":"714","creationTimestamp":"2024-07-17T17:51:33Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8a69f9ff-027d-455b-b65a-6c9aef9936e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:51:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a69f9ff-027d-455b-b65a-6c9aef9936e7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0717 10:54:13.277490    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m03
	I0717 10:54:13.277497    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.277503    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.277506    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.278360    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:54:13.278367    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.278372    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.278376    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.278379    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.278382    4493 round_trippers.go:580]     Audit-Id: 4ccbd8da-3f86-4bed-a7d3-60c99729f14a
	I0717 10:54:13.278387    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.278390    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.278486    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m03","uid":"4dcfd269-94b0-4652-bd6d-b7d938fc2b6d","resourceVersion":"741","creationTimestamp":"2024-07-17T17:52:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_52_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:52:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3641 chars]
	I0717 10:54:13.278623    4493 pod_ready.go:92] pod "kube-proxy-dnn4j" in "kube-system" namespace has status "Ready":"True"
	I0717 10:54:13.278630    4493 pod_ready.go:81] duration metric: took 2.368204ms for pod "kube-proxy-dnn4j" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.278637    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tp2zz" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.464644    4493 request.go:629] Waited for 185.945191ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp2zz
	I0717 10:54:13.464688    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp2zz
	I0717 10:54:13.464696    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.464709    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.464720    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.467132    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:13.467146    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.467156    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.467164    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.467171    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.467176    4493 round_trippers.go:580]     Audit-Id: ea4bef63-f5a4-4592-b6b7-6c8be4654625
	I0717 10:54:13.467183    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.467186    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.467358    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tp2zz","generateName":"kube-proxy-","namespace":"kube-system","uid":"9fda8ef7-b324-4cbb-a8d9-98f93132b2e7","resourceVersion":"486","creationTimestamp":"2024-07-17T17:50:42Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8a69f9ff-027d-455b-b65a-6c9aef9936e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a69f9ff-027d-455b-b65a-6c9aef9936e7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0717 10:54:13.663665    4493 request.go:629] Waited for 195.975703ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:54:13.663719    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:54:13.663729    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.663742    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.663749    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.666750    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:13.666761    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.666768    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.666772    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.666777    4493 round_trippers.go:580]     Audit-Id: 6dca5e6e-7ea6-4165-8f45-6234c65ce6ef
	I0717 10:54:13.666781    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.666786    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.666789    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.667152    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"e92886e5-127c-42d8-b0f7-76db7895a433","resourceVersion":"553","creationTimestamp":"2024-07-17T17:50:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_50_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3824 chars]
	I0717 10:54:13.667319    4493 pod_ready.go:92] pod "kube-proxy-tp2zz" in "kube-system" namespace has status "Ready":"True"
	I0717 10:54:13.667328    4493 pod_ready.go:81] duration metric: took 388.674095ms for pod "kube-proxy-tp2zz" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.667334    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zs8f8" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.865064    4493 request.go:629] Waited for 197.643779ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zs8f8
	I0717 10:54:13.865205    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zs8f8
	I0717 10:54:13.865214    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.865228    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.865236    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.868015    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:13.868027    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.868035    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.868039    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.868042    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.868046    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.868049    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.868052    4493 round_trippers.go:580]     Audit-Id: e223f844-6428-4437-ba2c-aa4b12136065
	I0717 10:54:13.868219    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zs8f8","generateName":"kube-proxy-","namespace":"kube-system","uid":"9e2bce56-d9e0-42a1-a265-4aab3577b031","resourceVersion":"774","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8a69f9ff-027d-455b-b65a-6c9aef9936e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a69f9ff-027d-455b-b65a-6c9aef9936e7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0717 10:54:14.064048    4493 request.go:629] Waited for 195.483185ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:14.064180    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:14.064188    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:14.064196    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:14.064202    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:14.066888    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:14.066898    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:14.066903    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:14.066908    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:14.066913    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:14 GMT
	I0717 10:54:14.066917    4493 round_trippers.go:580]     Audit-Id: 1afc25f1-31c7-49a8-a875-9ca832383835
	I0717 10:54:14.066924    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:14.066928    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:14.067005    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:54:14.067189    4493 pod_ready.go:92] pod "kube-proxy-zs8f8" in "kube-system" namespace has status "Ready":"True"
	I0717 10:54:14.067198    4493 pod_ready.go:81] duration metric: took 399.84786ms for pod "kube-proxy-zs8f8" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:14.067205    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:14.263903    4493 request.go:629] Waited for 196.644333ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-875000
	I0717 10:54:14.264000    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-875000
	I0717 10:54:14.264010    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:14.264020    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:14.264027    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:14.266779    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:14.266795    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:14.266802    4493 round_trippers.go:580]     Audit-Id: 7ca8298e-839c-4aba-84ea-dffab2142eef
	I0717 10:54:14.266808    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:14.266815    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:14.266818    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:14.266821    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:14.266825    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:14 GMT
	I0717 10:54:14.266915    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-875000","namespace":"kube-system","uid":"b2f1c23d-635b-490e-a964-c28e1566ead0","resourceVersion":"877","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e0081164f6599d019e7f075e4c8b7277","kubernetes.io/config.mirror":"e0081164f6599d019e7f075e4c8b7277","kubernetes.io/config.seen":"2024-07-17T17:49:49.643442746Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5194 chars]
	I0717 10:54:14.464779    4493 request.go:629] Waited for 197.568211ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:14.464861    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:14.464873    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:14.464884    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:14.464891    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:14.467251    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:14.467270    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:14.467280    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:14.467286    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:14.467291    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:14.467296    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:14.467301    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:14 GMT
	I0717 10:54:14.467308    4493 round_trippers.go:580]     Audit-Id: e840e5b4-4625-4177-8f0d-ce3feb728bc4
	I0717 10:54:14.467505    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:54:14.467750    4493 pod_ready.go:92] pod "kube-scheduler-multinode-875000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:54:14.467761    4493 pod_ready.go:81] duration metric: took 400.535511ms for pod "kube-scheduler-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:14.467770    4493 pod_ready.go:38] duration metric: took 4.209877934s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:54:14.467784    4493 api_server.go:52] waiting for apiserver process to appear ...
	I0717 10:54:14.467850    4493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 10:54:14.480570    4493 command_runner.go:130] > 1676
	I0717 10:54:14.480661    4493 api_server.go:72] duration metric: took 14.994312803s to wait for apiserver process to appear ...
	I0717 10:54:14.480671    4493 api_server.go:88] waiting for apiserver healthz status ...
	I0717 10:54:14.480681    4493 api_server.go:253] Checking apiserver healthz at https://192.169.0.15:8443/healthz ...
	I0717 10:54:14.484062    4493 api_server.go:279] https://192.169.0.15:8443/healthz returned 200:
	ok
	I0717 10:54:14.484095    4493 round_trippers.go:463] GET https://192.169.0.15:8443/version
	I0717 10:54:14.484100    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:14.484116    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:14.484122    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:14.484530    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:54:14.484536    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:14.484541    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:14.484544    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:14.484547    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:14.484565    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:14.484572    4493 round_trippers.go:580]     Content-Length: 263
	I0717 10:54:14.484575    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:14 GMT
	I0717 10:54:14.484579    4493 round_trippers.go:580]     Audit-Id: 34880f8e-1473-47f5-8b2a-9d08ec58e191
	I0717 10:54:14.484587    4493 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0717 10:54:14.484609    4493 api_server.go:141] control plane version: v1.30.2
	I0717 10:54:14.484617    4493 api_server.go:131] duration metric: took 3.941657ms to wait for apiserver health ...
	I0717 10:54:14.484622    4493 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 10:54:14.663731    4493 request.go:629] Waited for 179.021176ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods
	I0717 10:54:14.663779    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods
	I0717 10:54:14.663787    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:14.663797    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:14.663803    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:14.667437    4493 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:54:14.667448    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:14.667453    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:14.667458    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:14.667461    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:14 GMT
	I0717 10:54:14.667463    4493 round_trippers.go:580]     Audit-Id: b65801a4-2b15-427a-9824-3de9a8975246
	I0717 10:54:14.667465    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:14.667467    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:14.668120    4493 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"900"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"895","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85985 chars]
	I0717 10:54:14.669910    4493 system_pods.go:59] 12 kube-system pods found
	I0717 10:54:14.669920    4493 system_pods.go:61] "coredns-7db6d8ff4d-nlwxm" [d9e6c103-3eba-4549-b327-23c87ce480cd] Running
	I0717 10:54:14.669923    4493 system_pods.go:61] "etcd-multinode-875000" [b181608e-80a7-4ef3-9702-315fe76bc83b] Running
	I0717 10:54:14.669926    4493 system_pods.go:61] "kindnet-fnltt" [31c26a51-23d0-4f20-a716-fbe77e2d1347] Running
	I0717 10:54:14.669928    4493 system_pods.go:61] "kindnet-hwkds" [41b256d2-0784-4ebc-82a6-1d435f44924e] Running
	I0717 10:54:14.669931    4493 system_pods.go:61] "kindnet-pj9kh" [fd101f4e-0ee3-45fa-b5ed-0957fb0c87f5] Running
	I0717 10:54:14.669933    4493 system_pods.go:61] "kube-apiserver-multinode-875000" [994530a7-11e7-4b05-95ec-c77751a6c24d] Running
	I0717 10:54:14.669936    4493 system_pods.go:61] "kube-controller-manager-multinode-875000" [10a5876c-ddf6-4f37-82ca-96ea7ebde028] Running
	I0717 10:54:14.669939    4493 system_pods.go:61] "kube-proxy-dnn4j" [fd7faf4d-f212-4c89-9ac5-8e408c295411] Running
	I0717 10:54:14.669941    4493 system_pods.go:61] "kube-proxy-tp2zz" [9fda8ef7-b324-4cbb-a8d9-98f93132b2e7] Running
	I0717 10:54:14.669943    4493 system_pods.go:61] "kube-proxy-zs8f8" [9e2bce56-d9e0-42a1-a265-4aab3577b031] Running
	I0717 10:54:14.669946    4493 system_pods.go:61] "kube-scheduler-multinode-875000" [b2f1c23d-635b-490e-a964-c28e1566ead0] Running
	I0717 10:54:14.669949    4493 system_pods.go:61] "storage-provisioner" [2bf95484-4db9-4dc1-80b0-b4a35569c9af] Running
	I0717 10:54:14.669953    4493 system_pods.go:74] duration metric: took 185.321479ms to wait for pod list to return data ...
	I0717 10:54:14.669958    4493 default_sa.go:34] waiting for default service account to be created ...
	I0717 10:54:14.863864    4493 request.go:629] Waited for 193.778992ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/default/serviceaccounts
	I0717 10:54:14.863916    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/default/serviceaccounts
	I0717 10:54:14.863925    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:14.863936    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:14.863945    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:14.866362    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:14.866377    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:14.866385    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:14.866389    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:14.866393    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:14.866398    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:14.866402    4493 round_trippers.go:580]     Content-Length: 261
	I0717 10:54:14.866407    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:14 GMT
	I0717 10:54:14.866411    4493 round_trippers.go:580]     Audit-Id: 2c02ca95-a798-4ded-8a08-3ad5eb3f92db
	I0717 10:54:14.866426    4493 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"900"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"beced86b-963a-4d04-b8e2-f402ded37dee","resourceVersion":"334","creationTimestamp":"2024-07-17T17:50:04Z"}}]}
	I0717 10:54:14.866566    4493 default_sa.go:45] found service account: "default"
	I0717 10:54:14.866579    4493 default_sa.go:55] duration metric: took 196.609666ms for default service account to be created ...
	I0717 10:54:14.866586    4493 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 10:54:15.065542    4493 request.go:629] Waited for 198.888032ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods
	I0717 10:54:15.065691    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods
	I0717 10:54:15.065703    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:15.065714    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:15.065720    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:15.069666    4493 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:54:15.069681    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:15.069688    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:15.069692    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:15 GMT
	I0717 10:54:15.069697    4493 round_trippers.go:580]     Audit-Id: f02a4964-b8b8-451f-95cc-d7d65087f49f
	I0717 10:54:15.069702    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:15.069706    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:15.069710    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:15.070314    4493 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"900"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"895","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85985 chars]
	I0717 10:54:15.072120    4493 system_pods.go:86] 12 kube-system pods found
	I0717 10:54:15.072131    4493 system_pods.go:89] "coredns-7db6d8ff4d-nlwxm" [d9e6c103-3eba-4549-b327-23c87ce480cd] Running
	I0717 10:54:15.072135    4493 system_pods.go:89] "etcd-multinode-875000" [b181608e-80a7-4ef3-9702-315fe76bc83b] Running
	I0717 10:54:15.072139    4493 system_pods.go:89] "kindnet-fnltt" [31c26a51-23d0-4f20-a716-fbe77e2d1347] Running
	I0717 10:54:15.072142    4493 system_pods.go:89] "kindnet-hwkds" [41b256d2-0784-4ebc-82a6-1d435f44924e] Running
	I0717 10:54:15.072145    4493 system_pods.go:89] "kindnet-pj9kh" [fd101f4e-0ee3-45fa-b5ed-0957fb0c87f5] Running
	I0717 10:54:15.072148    4493 system_pods.go:89] "kube-apiserver-multinode-875000" [994530a7-11e7-4b05-95ec-c77751a6c24d] Running
	I0717 10:54:15.072152    4493 system_pods.go:89] "kube-controller-manager-multinode-875000" [10a5876c-ddf6-4f37-82ca-96ea7ebde028] Running
	I0717 10:54:15.072156    4493 system_pods.go:89] "kube-proxy-dnn4j" [fd7faf4d-f212-4c89-9ac5-8e408c295411] Running
	I0717 10:54:15.072159    4493 system_pods.go:89] "kube-proxy-tp2zz" [9fda8ef7-b324-4cbb-a8d9-98f93132b2e7] Running
	I0717 10:54:15.072162    4493 system_pods.go:89] "kube-proxy-zs8f8" [9e2bce56-d9e0-42a1-a265-4aab3577b031] Running
	I0717 10:54:15.072167    4493 system_pods.go:89] "kube-scheduler-multinode-875000" [b2f1c23d-635b-490e-a964-c28e1566ead0] Running
	I0717 10:54:15.072170    4493 system_pods.go:89] "storage-provisioner" [2bf95484-4db9-4dc1-80b0-b4a35569c9af] Running
	I0717 10:54:15.072175    4493 system_pods.go:126] duration metric: took 205.57941ms to wait for k8s-apps to be running ...
	I0717 10:54:15.072185    4493 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 10:54:15.072235    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 10:54:15.084233    4493 system_svc.go:56] duration metric: took 12.047019ms WaitForService to wait for kubelet
	I0717 10:54:15.084251    4493 kubeadm.go:582] duration metric: took 15.5978861s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:54:15.084263    4493 node_conditions.go:102] verifying NodePressure condition ...
	I0717 10:54:15.263938    4493 request.go:629] Waited for 179.547286ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes
	I0717 10:54:15.263981    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes
	I0717 10:54:15.263989    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:15.264006    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:15.264015    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:15.266530    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:15.266542    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:15.266548    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:15 GMT
	I0717 10:54:15.266552    4493 round_trippers.go:580]     Audit-Id: 78d29157-5004-4eb6-a99e-8177f6794cd0
	I0717 10:54:15.266556    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:15.266560    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:15.266564    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:15.266568    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:15.266832    4493 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"900"},"items":[{"metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14675 chars]
	I0717 10:54:15.267346    4493 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:54:15.267358    4493 node_conditions.go:123] node cpu capacity is 2
	I0717 10:54:15.267367    4493 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:54:15.267372    4493 node_conditions.go:123] node cpu capacity is 2
	I0717 10:54:15.267379    4493 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:54:15.267382    4493 node_conditions.go:123] node cpu capacity is 2
	I0717 10:54:15.267387    4493 node_conditions.go:105] duration metric: took 183.115174ms to run NodePressure ...
	I0717 10:54:15.267398    4493 start.go:241] waiting for startup goroutines ...
	I0717 10:54:15.267406    4493 start.go:246] waiting for cluster config update ...
	I0717 10:54:15.267414    4493 start.go:255] writing updated cluster config ...
	I0717 10:54:15.288493    4493 out.go:177] 
	I0717 10:54:15.310166    4493 config.go:182] Loaded profile config "multinode-875000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:54:15.310253    4493 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/config.json ...
	I0717 10:54:15.332942    4493 out.go:177] * Starting "multinode-875000-m02" worker node in "multinode-875000" cluster
	I0717 10:54:15.374937    4493 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:54:15.374972    4493 cache.go:56] Caching tarball of preloaded images
	I0717 10:54:15.375171    4493 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:54:15.375189    4493 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:54:15.375313    4493 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/config.json ...
	I0717 10:54:15.376428    4493 start.go:360] acquireMachinesLock for multinode-875000-m02: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:54:15.376547    4493 start.go:364] duration metric: took 98.697µs to acquireMachinesLock for "multinode-875000-m02"
	I0717 10:54:15.376565    4493 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:54:15.376571    4493 fix.go:54] fixHost starting: m02
	I0717 10:54:15.376903    4493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:54:15.376927    4493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:54:15.385815    4493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53197
	I0717 10:54:15.386155    4493 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:54:15.386521    4493 main.go:141] libmachine: Using API Version  1
	I0717 10:54:15.386536    4493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:54:15.386770    4493 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:54:15.386894    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .DriverName
	I0717 10:54:15.386981    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetState
	I0717 10:54:15.387057    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:54:15.387155    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | hyperkit pid from json: 4164
	I0717 10:54:15.388061    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | hyperkit pid 4164 missing from process table
	I0717 10:54:15.388095    4493 fix.go:112] recreateIfNeeded on multinode-875000-m02: state=Stopped err=<nil>
	I0717 10:54:15.388108    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .DriverName
	W0717 10:54:15.388190    4493 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:54:15.409078    4493 out.go:177] * Restarting existing hyperkit VM for "multinode-875000-m02" ...
	I0717 10:54:15.452099    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .Start
	I0717 10:54:15.452333    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:54:15.452363    4493 main.go:141] libmachine: (multinode-875000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/hyperkit.pid
	I0717 10:54:15.453684    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | hyperkit pid 4164 missing from process table
	I0717 10:54:15.453700    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | pid 4164 is in state "Stopped"
	I0717 10:54:15.453720    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/hyperkit.pid...
	I0717 10:54:15.453950    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | Using UUID 25304374-eb81-4156-982c-d8f8ac747f78
	I0717 10:54:15.478721    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | Generated MAC de:84:ef:f1:8f:c7
	I0717 10:54:15.478745    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-875000
	I0717 10:54:15.478878    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"25304374-eb81-4156-982c-d8f8ac747f78", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aad20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0717 10:54:15.478921    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"25304374-eb81-4156-982c-d8f8ac747f78", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aad20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0717 10:54:15.478968    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "25304374-eb81-4156-982c-d8f8ac747f78", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/multinode-875000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/bzimage,/Users/j
enkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-875000"}
	I0717 10:54:15.479012    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 25304374-eb81-4156-982c-d8f8ac747f78 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/multinode-875000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/mult
inode-875000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-875000"
	I0717 10:54:15.479039    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:54:15.480395    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 DEBUG: hyperkit: Pid is 4537
	I0717 10:54:15.480831    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | Attempt 0
	I0717 10:54:15.480842    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:54:15.480973    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | hyperkit pid from json: 4537
	I0717 10:54:15.482820    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | Searching for de:84:ef:f1:8f:c7 in /var/db/dhcpd_leases ...
	I0717 10:54:15.482892    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | Found 16 entries in /var/db/dhcpd_leases!
	I0717 10:54:15.482929    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:c1:c6:6d:b5:4e ID:1,92:c1:c6:6d:b5:4e Lease:0x6699568d}
	I0717 10:54:15.482950    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:a2:dd:4c:c6:bd:14 ID:1,a2:dd:4c:c6:bd:14 Lease:0x669804f2}
	I0717 10:54:15.482968    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:de:84:ef:f1:8f:c7 ID:1,de:84:ef:f1:8f:c7 Lease:0x669955e8}
	I0717 10:54:15.482977    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | Found match: de:84:ef:f1:8f:c7
	I0717 10:54:15.482986    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | IP: 192.169.0.16
	I0717 10:54:15.482992    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetConfigRaw
	I0717 10:54:15.483656    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetIP
	I0717 10:54:15.483904    4493 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/config.json ...
	I0717 10:54:15.484413    4493 machine.go:94] provisionDockerMachine start ...
	I0717 10:54:15.484424    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .DriverName
	I0717 10:54:15.484552    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:54:15.484671    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:54:15.484775    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:15.484875    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:15.484962    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:54:15.485082    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:54:15.485237    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0717 10:54:15.485246    4493 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:54:15.488026    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:54:15.496175    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:54:15.497553    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:54:15.497569    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:54:15.497580    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:54:15.497589    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:54:15.879013    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:54:15.879027    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:54:15.993844    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:54:15.993862    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:54:15.993871    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:54:15.993879    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:54:15.994681    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:54:15.994690    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:54:21.256495    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:21 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:54:21.256562    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:21 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:54:21.256573    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:21 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:54:21.280111    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:21 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:54:50.547294    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:54:50.547311    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetMachineName
	I0717 10:54:50.547440    4493 buildroot.go:166] provisioning hostname "multinode-875000-m02"
	I0717 10:54:50.547452    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetMachineName
	I0717 10:54:50.547549    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:54:50.547628    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:54:50.547725    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:50.547806    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:50.547894    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:54:50.548021    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:54:50.548160    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0717 10:54:50.548168    4493 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-875000-m02 && echo "multinode-875000-m02" | sudo tee /etc/hostname
	I0717 10:54:50.608395    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-875000-m02
	
	I0717 10:54:50.608420    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:54:50.608546    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:54:50.608639    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:50.608717    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:50.608801    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:54:50.608944    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:54:50.609098    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0717 10:54:50.609110    4493 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-875000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-875000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-875000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:54:50.667332    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:54:50.667354    4493 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:54:50.667369    4493 buildroot.go:174] setting up certificates
	I0717 10:54:50.667375    4493 provision.go:84] configureAuth start
	I0717 10:54:50.667383    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetMachineName
	I0717 10:54:50.667509    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetIP
	I0717 10:54:50.667618    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:54:50.667712    4493 provision.go:143] copyHostCerts
	I0717 10:54:50.667740    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:54:50.667790    4493 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:54:50.667796    4493 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:54:50.668026    4493 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:54:50.668269    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:54:50.668302    4493 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:54:50.668307    4493 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:54:50.668430    4493 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:54:50.668592    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:54:50.668623    4493 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:54:50.668628    4493 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:54:50.668734    4493 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:54:50.668897    4493 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.multinode-875000-m02 san=[127.0.0.1 192.169.0.16 localhost minikube multinode-875000-m02]
	I0717 10:54:50.772544    4493 provision.go:177] copyRemoteCerts
	I0717 10:54:50.772596    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:54:50.772612    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:54:50.772743    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:54:50.772842    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:50.772925    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:54:50.773001    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/id_rsa Username:docker}
	I0717 10:54:50.805428    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:54:50.805497    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:54:50.825423    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:54:50.825506    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 10:54:50.844675    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:54:50.844753    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:54:50.863692    4493 provision.go:87] duration metric: took 196.298177ms to configureAuth
	I0717 10:54:50.863710    4493 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:54:50.863892    4493 config.go:182] Loaded profile config "multinode-875000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:54:50.863923    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .DriverName
	I0717 10:54:50.864047    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:54:50.864143    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:54:50.864236    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:50.864315    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:50.864395    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:54:50.864501    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:54:50.864627    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0717 10:54:50.864635    4493 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:54:50.915603    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:54:50.915614    4493 buildroot.go:70] root file system type: tmpfs
	I0717 10:54:50.915694    4493 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:54:50.915704    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:54:50.915827    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:54:50.915913    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:50.915995    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:50.916077    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:54:50.916206    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:54:50.916351    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0717 10:54:50.916397    4493 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.15"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:54:50.976652    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.15
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:54:50.976670    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:54:50.976806    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:54:50.976915    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:50.977036    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:50.977129    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:54:50.977262    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:54:50.977409    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0717 10:54:50.977423    4493 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:54:52.540317    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:54:52.540331    4493 machine.go:97] duration metric: took 37.054917264s to provisionDockerMachine
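The unit-file update above is deliberately idempotent: the rendered configuration is written to docker.service.new, and Docker is only reinstalled and restarted when diff -u reports a difference against the unit on disk (here the diff failed outright because no docker.service existed yet, so the new unit was moved into place and the service enabled). A minimal sketch of that step, with the unit body elided:

    # Only swap the unit in and bounce Docker when the rendered file differs
    # from (or is missing compared to) what is currently installed.
    UNIT=/lib/systemd/system/docker.service
    sudo diff -u "$UNIT" "$UNIT.new" || {
        sudo mv "$UNIT.new" "$UNIT"
        sudo systemctl -f daemon-reload &&
            sudo systemctl -f enable docker &&
            sudo systemctl -f restart docker
    }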
	I0717 10:54:52.540340    4493 start.go:293] postStartSetup for "multinode-875000-m02" (driver="hyperkit")
	I0717 10:54:52.540349    4493 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:54:52.540359    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .DriverName
	I0717 10:54:52.540544    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:54:52.540556    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:54:52.540638    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:54:52.540730    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:52.540832    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:54:52.540909    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/id_rsa Username:docker}
	I0717 10:54:52.572875    4493 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:54:52.575767    4493 command_runner.go:130] > NAME=Buildroot
	I0717 10:54:52.575777    4493 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0717 10:54:52.575781    4493 command_runner.go:130] > ID=buildroot
	I0717 10:54:52.575784    4493 command_runner.go:130] > VERSION_ID=2023.02.9
	I0717 10:54:52.575788    4493 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0717 10:54:52.575851    4493 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:54:52.575861    4493 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:54:52.575959    4493 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:54:52.576150    4493 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:54:52.576156    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:54:52.576307    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:54:52.584281    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:54:52.603233    4493 start.go:296] duration metric: took 62.881414ms for postStartSetup
	I0717 10:54:52.603253    4493 fix.go:56] duration metric: took 37.225684715s for fixHost
	I0717 10:54:52.603269    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:54:52.603398    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:54:52.603486    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:52.603575    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:52.603658    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:54:52.603779    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:54:52.603916    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0717 10:54:52.603923    4493 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 10:54:52.654022    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721238892.728031416
	
	I0717 10:54:52.654033    4493 fix.go:216] guest clock: 1721238892.728031416
	I0717 10:54:52.654038    4493 fix.go:229] Guest: 2024-07-17 10:54:52.728031416 -0700 PDT Remote: 2024-07-17 10:54:52.603259 -0700 PDT m=+104.196631818 (delta=124.772416ms)
	I0717 10:54:52.654052    4493 fix.go:200] guest clock delta is within tolerance: 124.772416ms
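The two fix.go lines above are a clock-skew check: the guest's clock is read over SSH with date +%s.%N and compared with the host's wall clock at the moment the command returned; a delta of roughly 125ms is inside tolerance, so no resync is needed. A rough equivalent run from the macOS host (illustrative only; the address and user are the ones used in this run):

    # Read the Linux guest's clock over SSH and compare it with the host clock.
    guest_ts=$(ssh docker@192.169.0.16 'date +%s.%N')
    host_ts=$(date +%s)   # macOS date has no %N; whole seconds are enough for a rough check
    echo "guest-host delta: $(echo "$guest_ts - $host_ts" | bc) s"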
	I0717 10:54:52.654056    4493 start.go:83] releasing machines lock for "multinode-875000-m02", held for 37.276502003s
	I0717 10:54:52.654073    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .DriverName
	I0717 10:54:52.654220    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetIP
	I0717 10:54:52.677512    4493 out.go:177] * Found network options:
	I0717 10:54:52.719533    4493 out.go:177]   - NO_PROXY=192.169.0.15
	W0717 10:54:52.740505    4493 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:54:52.740530    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .DriverName
	I0717 10:54:52.740996    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .DriverName
	I0717 10:54:52.741124    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .DriverName
	I0717 10:54:52.741208    4493 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:54:52.741230    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	W0717 10:54:52.741259    4493 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:54:52.741332    4493 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 10:54:52.741345    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:54:52.741356    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:54:52.741448    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:52.741475    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:54:52.741545    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:54:52.741585    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:52.741632    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/id_rsa Username:docker}
	I0717 10:54:52.741667    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:54:52.741763    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/id_rsa Username:docker}
	I0717 10:54:52.770803    4493 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 10:54:52.770858    4493 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:54:52.770915    4493 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:54:52.818078    4493 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 10:54:52.818448    4493 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0717 10:54:52.818466    4493 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:54:52.818473    4493 start.go:495] detecting cgroup driver to use...
	I0717 10:54:52.818539    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:54:52.833779    4493 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0717 10:54:52.834101    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:54:52.843741    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:54:52.852983    4493 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:54:52.853036    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:54:52.862047    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:54:52.871203    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:54:52.880044    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:54:52.889152    4493 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:54:52.898259    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:54:52.906974    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:54:52.915724    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:54:52.924512    4493 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:54:52.932801    4493 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 10:54:52.932864    4493 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:54:52.941242    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:54:53.038142    4493 ssh_runner.go:195] Run: sudo systemctl restart containerd
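The sed edits above rewrite /etc/containerd/config.toml so that containerd's CRI plugin uses the cgroupfs cgroup driver (SystemdCgroup = false), the registry.k8s.io/pause:3.9 sandbox image, the v2 runc shim, and /etc/cni/net.d as the CNI conf dir, and then containerd is restarted. A quick way to confirm what ends up in the file (a sketch, assuming the default config.toml layout):

    # Show the settings the sed commands above are expected to leave behind.
    grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
    # Or ask containerd itself for its effective configuration:
    sudo containerd config dump | grep -n SystemdCgroup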
	I0717 10:54:53.056643    4493 start.go:495] detecting cgroup driver to use...
	I0717 10:54:53.056711    4493 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:54:53.073759    4493 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0717 10:54:53.075346    4493 command_runner.go:130] > [Unit]
	I0717 10:54:53.075355    4493 command_runner.go:130] > Description=Docker Application Container Engine
	I0717 10:54:53.075360    4493 command_runner.go:130] > Documentation=https://docs.docker.com
	I0717 10:54:53.075369    4493 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0717 10:54:53.075375    4493 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0717 10:54:53.075379    4493 command_runner.go:130] > StartLimitBurst=3
	I0717 10:54:53.075383    4493 command_runner.go:130] > StartLimitIntervalSec=60
	I0717 10:54:53.075390    4493 command_runner.go:130] > [Service]
	I0717 10:54:53.075393    4493 command_runner.go:130] > Type=notify
	I0717 10:54:53.075397    4493 command_runner.go:130] > Restart=on-failure
	I0717 10:54:53.075401    4493 command_runner.go:130] > Environment=NO_PROXY=192.169.0.15
	I0717 10:54:53.075407    4493 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0717 10:54:53.075415    4493 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0717 10:54:53.075421    4493 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0717 10:54:53.075427    4493 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0717 10:54:53.075433    4493 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0717 10:54:53.075438    4493 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0717 10:54:53.075444    4493 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0717 10:54:53.075457    4493 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0717 10:54:53.075463    4493 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0717 10:54:53.075467    4493 command_runner.go:130] > ExecStart=
	I0717 10:54:53.075478    4493 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0717 10:54:53.075484    4493 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0717 10:54:53.075490    4493 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0717 10:54:53.075495    4493 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0717 10:54:53.075499    4493 command_runner.go:130] > LimitNOFILE=infinity
	I0717 10:54:53.075503    4493 command_runner.go:130] > LimitNPROC=infinity
	I0717 10:54:53.075512    4493 command_runner.go:130] > LimitCORE=infinity
	I0717 10:54:53.075517    4493 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0717 10:54:53.075521    4493 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0717 10:54:53.075525    4493 command_runner.go:130] > TasksMax=infinity
	I0717 10:54:53.075529    4493 command_runner.go:130] > TimeoutStartSec=0
	I0717 10:54:53.075534    4493 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0717 10:54:53.075538    4493 command_runner.go:130] > Delegate=yes
	I0717 10:54:53.075542    4493 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0717 10:54:53.075550    4493 command_runner.go:130] > KillMode=process
	I0717 10:54:53.075555    4493 command_runner.go:130] > [Install]
	I0717 10:54:53.075559    4493 command_runner.go:130] > WantedBy=multi-user.target
	I0717 10:54:53.075672    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:54:53.087097    4493 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:54:53.104469    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:54:53.115846    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:54:53.126912    4493 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:54:53.147236    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:54:53.158538    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:54:53.173393    4493 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
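Note that /etc/crictl.yaml is written twice in this run: first pointing at the containerd socket while that runtime is being stopped, then, once Docker is selected, at /var/run/cri-dockerd.sock as echoed back above. crictl reads this file by default, so the switch can be verified directly (illustrative):

    cat /etc/crictl.yaml    # should show runtime-endpoint: unix:///var/run/cri-dockerd.sock
    sudo crictl version     # crictl picks up /etc/crictl.yaml automatically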
	I0717 10:54:53.173643    4493 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:54:53.176317    4493 command_runner.go:130] > /usr/bin/cri-dockerd
	I0717 10:54:53.176498    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:54:53.184492    4493 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:54:53.197780    4493 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:54:53.296195    4493 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:54:53.414543    4493 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:54:53.414564    4493 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:54:53.428402    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:54:53.522036    4493 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:54:55.814835    4493 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.292717825s)
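docker.go:574 above reports that a small /etc/docker/daemon.json (130 bytes) is pushed to make dockerd use the cgroupfs cgroup driver before the restart. The actual payload is not printed in the log; a daemon.json that selects that driver would typically look like the following (hypothetical content, shown only to make the step concrete):

    # Hypothetical daemon.json selecting the cgroupfs cgroup driver; the real
    # 130-byte file pushed here is not shown in the log.
    printf '{\n  "exec-opts": ["native.cgroupdriver=cgroupfs"]\n}\n' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker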
	I0717 10:54:55.814898    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:54:55.825345    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:54:55.835555    4493 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:54:55.928559    4493 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:54:56.021035    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:54:56.124129    4493 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:54:56.137843    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:54:56.149036    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:54:56.250690    4493 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 10:54:56.306186    4493 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:54:56.306260    4493 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:54:56.312094    4493 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0717 10:54:56.312108    4493 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 10:54:56.312113    4493 command_runner.go:130] > Device: 0,22	Inode: 774         Links: 1
	I0717 10:54:56.312119    4493 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0717 10:54:56.312123    4493 command_runner.go:130] > Access: 2024-07-17 17:54:56.337734451 +0000
	I0717 10:54:56.312133    4493 command_runner.go:130] > Modify: 2024-07-17 17:54:56.337734451 +0000
	I0717 10:54:56.312138    4493 command_runner.go:130] > Change: 2024-07-17 17:54:56.339734451 +0000
	I0717 10:54:56.312141    4493 command_runner.go:130] >  Birth: -
	I0717 10:54:56.312293    4493 start.go:563] Will wait 60s for crictl version
	I0717 10:54:56.312346    4493 ssh_runner.go:195] Run: which crictl
	I0717 10:54:56.315353    4493 command_runner.go:130] > /usr/bin/crictl
	I0717 10:54:56.315462    4493 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:54:56.343066    4493 command_runner.go:130] > Version:  0.1.0
	I0717 10:54:56.343082    4493 command_runner.go:130] > RuntimeName:  docker
	I0717 10:54:56.343089    4493 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0717 10:54:56.343093    4493 command_runner.go:130] > RuntimeApiVersion:  v1
	I0717 10:54:56.343140    4493 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 10:54:56.343208    4493 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:54:56.360434    4493 command_runner.go:130] > 27.0.3
	I0717 10:54:56.361492    4493 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:54:56.378143    4493 command_runner.go:130] > 27.0.3
	I0717 10:54:56.401360    4493 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:54:56.421398    4493 out.go:177]   - env NO_PROXY=192.169.0.15
	I0717 10:54:56.442562    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetIP
	I0717 10:54:56.442948    4493 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:54:56.447840    4493 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
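The one-liner above is the idempotent /etc/hosts update used for host.minikube.internal (and, further down, for control-plane.minikube.internal): strip any existing line ending in the name, append the fresh IP-to-name mapping, and copy the temp file back into place. Parameterized sketch of the same pattern, using the IP and name from this run:

    # Remove any stale entry for $name, append the current mapping, then copy the result back.
    add_host_entry() {
        local ip="$1" name="$2"
        { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
        sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }
    add_host_entry 192.169.0.1 host.minikube.internal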
	I0717 10:54:56.457210    4493 mustload.go:65] Loading cluster: multinode-875000
	I0717 10:54:56.457386    4493 config.go:182] Loaded profile config "multinode-875000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:54:56.457622    4493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:54:56.457644    4493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:54:56.466316    4493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53218
	I0717 10:54:56.466820    4493 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:54:56.467157    4493 main.go:141] libmachine: Using API Version  1
	I0717 10:54:56.467169    4493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:54:56.467393    4493 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:54:56.467497    4493 main.go:141] libmachine: (multinode-875000) Calling .GetState
	I0717 10:54:56.467585    4493 main.go:141] libmachine: (multinode-875000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:54:56.467673    4493 main.go:141] libmachine: (multinode-875000) DBG | hyperkit pid from json: 4506
	I0717 10:54:56.468609    4493 host.go:66] Checking if "multinode-875000" exists ...
	I0717 10:54:56.468870    4493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:54:56.468894    4493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:54:56.477263    4493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53220
	I0717 10:54:56.477587    4493 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:54:56.477971    4493 main.go:141] libmachine: Using API Version  1
	I0717 10:54:56.477987    4493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:54:56.478209    4493 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:54:56.478326    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:54:56.478414    4493 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000 for IP: 192.169.0.16
	I0717 10:54:56.478420    4493 certs.go:194] generating shared ca certs ...
	I0717 10:54:56.478433    4493 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:54:56.478579    4493 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:54:56.478638    4493 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:54:56.478648    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:54:56.478673    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:54:56.478692    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:54:56.478710    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:54:56.478796    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:54:56.478835    4493 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:54:56.478845    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:54:56.478883    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:54:56.478919    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:54:56.478951    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:54:56.479022    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:54:56.479056    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:54:56.479078    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:54:56.479096    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:54:56.479119    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:54:56.499479    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:54:56.520218    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:54:56.539892    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:54:56.561151    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:54:56.580936    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:54:56.600650    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:54:56.620542    4493 ssh_runner.go:195] Run: openssl version
	I0717 10:54:56.624552    4493 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0717 10:54:56.624757    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:54:56.632959    4493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:54:56.636184    4493 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:54:56.636381    4493 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:54:56.636425    4493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:54:56.640335    4493 command_runner.go:130] > 3ec20f2e
	I0717 10:54:56.640548    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 10:54:56.648681    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:54:56.656838    4493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:54:56.660147    4493 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:54:56.660234    4493 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:54:56.660270    4493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:54:56.664305    4493 command_runner.go:130] > b5213941
	I0717 10:54:56.664449    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:54:56.672888    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:54:56.681217    4493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:54:56.684479    4493 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:54:56.684605    4493 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:54:56.684642    4493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:54:56.688668    4493 command_runner.go:130] > 51391683
	I0717 10:54:56.688809    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
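Each certificate copied under /usr/share/ca-certificates is linked into /etc/ssl/certs twice: once under its own name and once under the subject-hash name (<hash>.0) that OpenSSL uses to look up trust anchors, with the hash taken from openssl x509 -hash -noout as shown above. Condensed sketch of that loop body for one of the files from this run:

    cert=16392.pem                                   # repeated above for minikubeCA.pem and 1639.pem
    src=/usr/share/ca-certificates/$cert
    sudo test -s "$src" && sudo ln -fs "$src" "/etc/ssl/certs/$cert"
    hash=$(openssl x509 -hash -noout -in "$src")     # e.g. 3ec20f2e in this run
    sudo test -L "/etc/ssl/certs/$hash.0" || sudo ln -fs "/etc/ssl/certs/$cert" "/etc/ssl/certs/$hash.0"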
	I0717 10:54:56.697420    4493 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:54:56.700396    4493 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 10:54:56.700532    4493 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 10:54:56.700566    4493 kubeadm.go:934] updating node {m02 192.169.0.16 8443 v1.30.2 docker false true} ...
	I0717 10:54:56.700619    4493 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-875000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 10:54:56.700661    4493 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:54:56.707687    4493 command_runner.go:130] > kubeadm
	I0717 10:54:56.707698    4493 command_runner.go:130] > kubectl
	I0717 10:54:56.707701    4493 command_runner.go:130] > kubelet
	I0717 10:54:56.707712    4493 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:54:56.707752    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0717 10:54:56.714963    4493 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0717 10:54:56.728389    4493 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:54:56.741770    4493 ssh_runner.go:195] Run: grep 192.169.0.15	control-plane.minikube.internal$ /etc/hosts
	I0717 10:54:56.744668    4493 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:54:56.754021    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:54:56.845666    4493 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:54:56.860725    4493 host.go:66] Checking if "multinode-875000" exists ...
	I0717 10:54:56.861012    4493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:54:56.861037    4493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:54:56.869837    4493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53222
	I0717 10:54:56.870195    4493 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:54:56.870563    4493 main.go:141] libmachine: Using API Version  1
	I0717 10:54:56.870576    4493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:54:56.870787    4493 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:54:56.870902    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:54:56.871001    4493 start.go:317] joinCluster: &{Name:multinode-875000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.2 ClusterName:multinode-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.15 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.17 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:f
alse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:54:56.871094    4493 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0717 10:54:56.871120    4493 host.go:66] Checking if "multinode-875000-m02" exists ...
	I0717 10:54:56.871394    4493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:54:56.871421    4493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:54:56.880400    4493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53224
	I0717 10:54:56.880751    4493 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:54:56.881110    4493 main.go:141] libmachine: Using API Version  1
	I0717 10:54:56.881127    4493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:54:56.881441    4493 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:54:56.881593    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .DriverName
	I0717 10:54:56.881682    4493 mustload.go:65] Loading cluster: multinode-875000
	I0717 10:54:56.881867    4493 config.go:182] Loaded profile config "multinode-875000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:54:56.882088    4493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:54:56.882112    4493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:54:56.890925    4493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53226
	I0717 10:54:56.891286    4493 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:54:56.891611    4493 main.go:141] libmachine: Using API Version  1
	I0717 10:54:56.891627    4493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:54:56.891830    4493 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:54:56.891949    4493 main.go:141] libmachine: (multinode-875000) Calling .GetState
	I0717 10:54:56.892027    4493 main.go:141] libmachine: (multinode-875000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:54:56.892105    4493 main.go:141] libmachine: (multinode-875000) DBG | hyperkit pid from json: 4506
	I0717 10:54:56.893081    4493 host.go:66] Checking if "multinode-875000" exists ...
	I0717 10:54:56.893356    4493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:54:56.893379    4493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:54:56.902259    4493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53228
	I0717 10:54:56.902618    4493 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:54:56.902942    4493 main.go:141] libmachine: Using API Version  1
	I0717 10:54:56.902953    4493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:54:56.903151    4493 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:54:56.903258    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:54:56.903348    4493 api_server.go:166] Checking apiserver status ...
	I0717 10:54:56.903400    4493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 10:54:56.903410    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:54:56.903490    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:54:56.903570    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:54:56.903659    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:54:56.903737    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/id_rsa Username:docker}
	I0717 10:54:56.942665    4493 command_runner.go:130] > 1676
	I0717 10:54:56.942766    4493 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1676/cgroup
	W0717 10:54:56.951065    4493 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1676/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 10:54:56.951135    4493 ssh_runner.go:195] Run: ls
	I0717 10:54:56.954491    4493 api_server.go:253] Checking apiserver healthz at https://192.169.0.15:8443/healthz ...
	I0717 10:54:56.958186    4493 api_server.go:279] https://192.169.0.15:8443/healthz returned 200:
	ok
	I0717 10:54:56.958243    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl drain multinode-875000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0717 10:54:57.041498    4493 command_runner.go:130] > node/multinode-875000-m02 cordoned
	I0717 10:55:00.062461    4493 command_runner.go:130] > pod "busybox-fc5497c4f-sp4jf" has DeletionTimestamp older than 1 seconds, skipping
	I0717 10:55:00.062482    4493 command_runner.go:130] > node/multinode-875000-m02 drained
	I0717 10:55:00.064322    4493 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-pj9kh, kube-system/kube-proxy-tp2zz
	I0717 10:55:00.064403    4493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl drain multinode-875000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.10605879s)
	I0717 10:55:00.064418    4493 node.go:128] successfully drained node "multinode-875000-m02"
	I0717 10:55:00.064443    4493 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0717 10:55:00.064478    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:55:00.064611    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:55:00.064706    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:55:00.064802    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:55:00.064885    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/id_rsa Username:docker}
	I0717 10:55:00.146148    4493 command_runner.go:130] > [preflight] Running pre-flight checks
	I0717 10:55:00.146319    4493 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0717 10:55:00.146328    4493 command_runner.go:130] > [reset] Stopping the kubelet service
	I0717 10:55:00.153055    4493 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0717 10:55:00.362034    4493 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0717 10:55:00.363625    4493 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0717 10:55:00.363636    4493 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0717 10:55:00.363645    4493 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0717 10:55:00.363652    4493 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0717 10:55:00.363658    4493 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0717 10:55:00.363662    4493 command_runner.go:130] > to reset your system's IPVS tables.
	I0717 10:55:00.363667    4493 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0717 10:55:00.363678    4493 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0717 10:55:00.364432    4493 command_runner.go:130] ! W0717 17:55:00.225625    1261 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0717 10:55:00.364457    4493 command_runner.go:130] ! W0717 17:55:00.441414    1261 cleanupnode.go:106] [reset] Failed to remove containers: failed to stop running pod 0feb0b072ced7ae109f1a463a2def851272dd796646878c46af448aa0c69e0be: output: E0717 17:55:00.346667    1290 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-sp4jf_default\" network: cni config uninitialized" podSandboxID="0feb0b072ced7ae109f1a463a2def851272dd796646878c46af448aa0c69e0be"
	I0717 10:55:00.364470    4493 command_runner.go:130] ! time="2024-07-17T17:55:00Z" level=fatal msg="stopping the pod sandbox \"0feb0b072ced7ae109f1a463a2def851272dd796646878c46af448aa0c69e0be\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-sp4jf_default\" network: cni config uninitialized"
	I0717 10:55:00.364479    4493 command_runner.go:130] ! : exit status 1
	I0717 10:55:00.364491    4493 node.go:155] successfully reset node "multinode-875000-m02"
	I0717 10:55:00.364766    4493 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:55:00.365012    4493 kapi.go:59] client config for multinode-875000: &rest.Config{Host:"https://192.169.0.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xeec6b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 10:55:00.365282    4493 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0717 10:55:00.365311    4493 round_trippers.go:463] DELETE https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:00.365315    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:00.365322    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:00.365325    4493 round_trippers.go:473]     Content-Type: application/json
	I0717 10:55:00.365329    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:00.367975    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:00.367985    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:00.367991    4493 round_trippers.go:580]     Content-Length: 171
	I0717 10:55:00.367994    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:00 GMT
	I0717 10:55:00.368006    4493 round_trippers.go:580]     Audit-Id: 7409137d-ed16-4812-8938-99c2d2747fe9
	I0717 10:55:00.368012    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:00.368014    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:00.368017    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:00.368020    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:00.368030    4493 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-875000-m02","kind":"nodes","uid":"e92886e5-127c-42d8-b0f7-76db7895a433"}}
	I0717 10:55:00.368058    4493 node.go:180] successfully deleted node "multinode-875000-m02"
	I0717 10:55:00.368066    4493 start.go:334] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
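What precedes and follows here is the full worker re-join sequence for m02: cordon and drain the node from the control plane, run kubeadm reset on the worker, delete the Node object through the API, mint a fresh non-expiring join token on the control plane (--ttl=0), and run kubeadm join on the worker. Condensed into plain commands (a sketch using the node names and CRI socket from this run, not minikube's exact invocation):

    # On the control plane: evict workloads and remove the stale Node object.
    kubectl drain multinode-875000-m02 --ignore-daemonsets --delete-emptydir-data --force
    kubectl delete node multinode-875000-m02
    # On the worker: wipe the old kubeadm state.
    sudo kubeadm reset --force --cri-socket=unix:///var/run/cri-dockerd.sock
    # Back on the control plane: print a join command with a non-expiring token ...
    join_cmd=$(sudo kubeadm token create --print-join-command --ttl=0)
    # ... then run it on the worker with an explicit node name.
    sudo $join_cmd --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-875000-m02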
	I0717 10:55:00.368088    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 10:55:00.368103    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:55:00.368251    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:55:00.368350    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:55:00.368443    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:55:00.368537    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/id_rsa Username:docker}
	I0717 10:55:00.451521    4493 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 5xpi7v.8qt9i595u32wzn59 --discovery-token-ca-cert-hash sha256:6ede73121e365fd80e9329df76f11084b0ca9769c5610fa08d82ec64ba1ac24d 
	I0717 10:55:00.453483    4493 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0717 10:55:00.453501    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5xpi7v.8qt9i595u32wzn59 --discovery-token-ca-cert-hash sha256:6ede73121e365fd80e9329df76f11084b0ca9769c5610fa08d82ec64ba1ac24d --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-875000-m02"
	I0717 10:55:00.488054    4493 command_runner.go:130] > [preflight] Running pre-flight checks
	I0717 10:55:00.586862    4493 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0717 10:55:00.586883    4493 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0717 10:55:00.619472    4493 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 10:55:00.619554    4493 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 10:55:00.619636    4493 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0717 10:55:00.724010    4493 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 10:55:01.224638    4493 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.102674ms
	I0717 10:55:01.224657    4493 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0717 10:55:01.236511    4493 command_runner.go:130] > This node has joined the cluster:
	I0717 10:55:01.236525    4493 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0717 10:55:01.236530    4493 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0717 10:55:01.236536    4493 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0717 10:55:01.238002    4493 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 10:55:01.238169    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 10:55:01.342452    4493 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0717 10:55:01.442891    4493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-875000-m02 minikube.k8s.io/updated_at=2024_07_17T10_55_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=multinode-875000 minikube.k8s.io/primary=false
	I0717 10:55:01.510313    4493 command_runner.go:130] > node/multinode-875000-m02 labeled
	I0717 10:55:01.510343    4493 start.go:319] duration metric: took 4.639218919s to joinCluster
	I0717 10:55:01.510392    4493 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0717 10:55:01.510566    4493 config.go:182] Loaded profile config "multinode-875000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:55:01.533251    4493 out.go:177] * Verifying Kubernetes components...
	I0717 10:55:01.592481    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:55:01.687439    4493 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:55:01.699553    4493 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:55:01.699737    4493 kapi.go:59] client config for multinode-875000: &rest.Config{Host:"https://192.169.0.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xeec6b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 10:55:01.699921    4493 node_ready.go:35] waiting up to 6m0s for node "multinode-875000-m02" to be "Ready" ...
	I0717 10:55:01.699961    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:01.699966    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:01.699972    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:01.699975    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:01.701540    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:55:01.701553    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:01.701562    4493 round_trippers.go:580]     Audit-Id: d610837f-a903-4595-b821-1ecb3d160396
	I0717 10:55:01.701571    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:01.701594    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:01.701601    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:01.701605    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:01.701608    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:01 GMT
	I0717 10:55:01.701839    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"980","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3563 chars]
	I0717 10:55:02.200205    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:02.200225    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:02.200236    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:02.200241    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:02.202575    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:02.202587    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:02.202594    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:02.202600    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:02.202604    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:02.202610    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:02 GMT
	I0717 10:55:02.202620    4493 round_trippers.go:580]     Audit-Id: ccb73072-2e97-4be1-996d-85722f328eaa
	I0717 10:55:02.202632    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:02.203135    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"980","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3563 chars]
	I0717 10:55:02.700739    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:02.700763    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:02.700774    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:02.700781    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:02.703244    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:02.703259    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:02.703265    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:02.703270    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:02.703274    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:02.703279    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:02.703309    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:02 GMT
	I0717 10:55:02.703320    4493 round_trippers.go:580]     Audit-Id: 05307e3b-9f52-428a-9ee7-31cb89be7343
	I0717 10:55:02.703388    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"980","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3563 chars]
	I0717 10:55:03.200219    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:03.200238    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:03.200244    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:03.200247    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:03.202237    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:55:03.202251    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:03.202257    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:03 GMT
	I0717 10:55:03.202260    4493 round_trippers.go:580]     Audit-Id: bd11140b-6603-4f0c-b555-8d97e33b2574
	I0717 10:55:03.202264    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:03.202268    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:03.202272    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:03.202276    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:03.202349    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:03.700095    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:03.700113    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:03.700169    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:03.700174    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:03.702595    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:03.702609    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:03.702615    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:03.702617    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:03.702620    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:03.702623    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:03 GMT
	I0717 10:55:03.702625    4493 round_trippers.go:580]     Audit-Id: 53111263-6831-434a-a406-ff2e35a2b89f
	I0717 10:55:03.702628    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:03.702725    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:03.702909    4493 node_ready.go:53] node "multinode-875000-m02" has status "Ready":"False"
	I0717 10:55:04.200135    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:04.200151    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:04.200158    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:04.200162    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:04.201606    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:55:04.201615    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:04.201620    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:04.201625    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:04.201628    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:04 GMT
	I0717 10:55:04.201631    4493 round_trippers.go:580]     Audit-Id: 8461e4b0-4d8b-4981-ac49-4c0f962bf063
	I0717 10:55:04.201635    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:04.201637    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:04.201723    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:04.700291    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:04.700313    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:04.700324    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:04.700330    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:04.702569    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:04.702584    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:04.702591    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:04.702598    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:04.702602    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:04.702605    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:04 GMT
	I0717 10:55:04.702609    4493 round_trippers.go:580]     Audit-Id: 966e4f03-6e8b-4a3f-9dce-940f1d802dfd
	I0717 10:55:04.702613    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:04.702687    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:05.200269    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:05.200382    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:05.200398    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:05.200421    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:05.203290    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:05.203305    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:05.203313    4493 round_trippers.go:580]     Audit-Id: fb046001-3559-4356-b9ce-d7024ab60ed1
	I0717 10:55:05.203317    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:05.203320    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:05.203324    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:05.203329    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:05.203332    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:05 GMT
	I0717 10:55:05.203412    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:05.700402    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:05.700506    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:05.700522    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:05.700533    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:05.702879    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:05.702892    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:05.702899    4493 round_trippers.go:580]     Audit-Id: c7ed85b0-8fbe-4717-8f42-5e4801ed70d8
	I0717 10:55:05.702925    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:05.702932    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:05.702937    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:05.702942    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:05.702947    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:05 GMT
	I0717 10:55:05.703176    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:05.703400    4493 node_ready.go:53] node "multinode-875000-m02" has status "Ready":"False"
	I0717 10:55:06.200532    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:06.200547    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:06.200556    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:06.200559    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:06.202322    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:55:06.202335    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:06.202341    4493 round_trippers.go:580]     Audit-Id: c4b5945c-d4ba-468e-aa74-89f74ef67368
	I0717 10:55:06.202344    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:06.202349    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:06.202352    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:06.202356    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:06.202368    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:06 GMT
	I0717 10:55:06.202610    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:06.700376    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:06.700393    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:06.700404    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:06.700410    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:06.703109    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:06.703121    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:06.703128    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:06.703133    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:06.703137    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:06 GMT
	I0717 10:55:06.703141    4493 round_trippers.go:580]     Audit-Id: d9117cca-976b-42dd-bf76-ea2d62050fb5
	I0717 10:55:06.703146    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:06.703149    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:06.703625    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:07.200703    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:07.200725    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:07.200736    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:07.200743    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:07.203081    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:07.203094    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:07.203101    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:07.203106    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:07 GMT
	I0717 10:55:07.203110    4493 round_trippers.go:580]     Audit-Id: 6d1174d1-8eb9-47c5-894e-5597178454de
	I0717 10:55:07.203122    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:07.203127    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:07.203133    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:07.203410    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:07.702052    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:07.702122    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:07.702135    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:07.702142    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:07.704618    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:07.704630    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:07.704638    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:07.704646    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:07.704649    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:07.704653    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:07.704657    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:07 GMT
	I0717 10:55:07.704661    4493 round_trippers.go:580]     Audit-Id: 520f58b3-f649-4814-a04e-8e8d393a90be
	I0717 10:55:07.704718    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:07.704939    4493 node_ready.go:53] node "multinode-875000-m02" has status "Ready":"False"
	I0717 10:55:08.201539    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:08.201560    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:08.201572    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:08.201577    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:08.204001    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:08.204016    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:08.204023    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:08.204028    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:08.204056    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:08.204063    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:08 GMT
	I0717 10:55:08.204068    4493 round_trippers.go:580]     Audit-Id: 01e4b9d6-8768-44a4-bdb9-614da75a5859
	I0717 10:55:08.204071    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:08.204144    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:08.700307    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:08.700322    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:08.700330    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:08.700336    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:08.702138    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:55:08.702150    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:08.702155    4493 round_trippers.go:580]     Audit-Id: 6b587bbd-1dd4-42b4-9106-21df5494a268
	I0717 10:55:08.702159    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:08.702161    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:08.702164    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:08.702167    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:08.702169    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:08 GMT
	I0717 10:55:08.702275    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:09.200553    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:09.200583    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:09.200595    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:09.200601    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:09.203493    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:09.203513    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:09.203523    4493 round_trippers.go:580]     Audit-Id: bbe0d358-4712-4872-802a-e6a8cee28ec6
	I0717 10:55:09.203530    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:09.203536    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:09.203541    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:09.203548    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:09.203553    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:09 GMT
	I0717 10:55:09.203687    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:09.701289    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:09.701312    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:09.701323    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:09.701329    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:09.704307    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:09.704328    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:09.704336    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:09.704341    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:09.704344    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:09.704364    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:09.704375    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:09 GMT
	I0717 10:55:09.704382    4493 round_trippers.go:580]     Audit-Id: 43108773-3b2c-43b9-a2fc-a67e253c4276
	I0717 10:55:09.704722    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:10.201373    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:10.201396    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:10.201408    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:10.201416    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:10.203944    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:10.203960    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:10.203970    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:10.203975    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:10.203979    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:10.203982    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:10 GMT
	I0717 10:55:10.203985    4493 round_trippers.go:580]     Audit-Id: 8334ea88-80fe-481f-bfc2-3bdf8e5007a0
	I0717 10:55:10.203988    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:10.204137    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:10.204365    4493 node_ready.go:53] node "multinode-875000-m02" has status "Ready":"False"
	I0717 10:55:10.701655    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:10.701678    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:10.701689    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:10.701696    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:10.704152    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:10.704169    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:10.704177    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:10.704181    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:10.704193    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:10 GMT
	I0717 10:55:10.704197    4493 round_trippers.go:580]     Audit-Id: 98f9d28b-0a42-4910-8099-c1c2a1178293
	I0717 10:55:10.704200    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:10.704204    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:10.704466    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:11.200326    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:11.200339    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:11.200345    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:11.200349    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:11.202093    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:55:11.202104    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:11.202109    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:11.202113    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:11 GMT
	I0717 10:55:11.202118    4493 round_trippers.go:580]     Audit-Id: cb492284-7d75-4f08-8d35-0a3336ca07bf
	I0717 10:55:11.202121    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:11.202125    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:11.202129    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:11.202242    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:11.700801    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:11.700828    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:11.700841    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:11.700846    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:11.703521    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:11.703536    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:11.703557    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:11.703567    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:11.703574    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:11.703582    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:11.703585    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:11 GMT
	I0717 10:55:11.703590    4493 round_trippers.go:580]     Audit-Id: b34c9af3-3583-4eff-8430-2584ab881f5f
	I0717 10:55:11.703833    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1011","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0717 10:55:12.200439    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:12.200460    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:12.200472    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:12.200477    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:12.202777    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:12.202790    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:12.202797    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:12.202811    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:12.202818    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:12.202822    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:12.202842    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:12 GMT
	I0717 10:55:12.202851    4493 round_trippers.go:580]     Audit-Id: 8c615bbd-dc66-4929-b8b1-81662a1a74a9
	I0717 10:55:12.202931    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1011","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0717 10:55:12.701497    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:12.701513    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:12.701522    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:12.701526    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:12.703441    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:55:12.703459    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:12.703469    4493 round_trippers.go:580]     Audit-Id: fe37677f-8d42-4dcd-a383-1973ea7c9482
	I0717 10:55:12.703478    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:12.703483    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:12.703488    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:12.703495    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:12.703506    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:12 GMT
	I0717 10:55:12.703677    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1011","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0717 10:55:12.703854    4493 node_ready.go:53] node "multinode-875000-m02" has status "Ready":"False"
	I0717 10:55:13.200995    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:13.201016    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:13.201029    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:13.201034    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:13.203629    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:13.203645    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:13.203652    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:13.203658    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:13 GMT
	I0717 10:55:13.203663    4493 round_trippers.go:580]     Audit-Id: b6627e79-75cd-47c9-b51d-de40f1b5842a
	I0717 10:55:13.203666    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:13.203670    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:13.203673    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:13.203772    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1011","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0717 10:55:13.700366    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:13.700380    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:13.700386    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:13.700390    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:13.702030    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:55:13.702042    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:13.702049    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:13.702052    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:13.702056    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:13.702060    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:13.702065    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:13 GMT
	I0717 10:55:13.702068    4493 round_trippers.go:580]     Audit-Id: 96cc2dbe-d717-407c-8f23-63cfc496bdf3
	I0717 10:55:13.702275    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1011","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0717 10:55:14.201033    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:14.201055    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:14.201066    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:14.201073    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:14.203567    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:14.203579    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:14.203586    4493 round_trippers.go:580]     Audit-Id: 34ad8a68-a477-459a-bd54-aff77356161d
	I0717 10:55:14.203590    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:14.203598    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:14.203602    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:14.203605    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:14.203609    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:14 GMT
	I0717 10:55:14.203837    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1011","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0717 10:55:14.700783    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:14.700809    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:14.700854    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:14.700860    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:14.703359    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:14.703377    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:14.703386    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:14.703395    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:14.703403    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:14 GMT
	I0717 10:55:14.703407    4493 round_trippers.go:580]     Audit-Id: 2d4c051d-a143-4bc8-ab8a-d84f2bf13089
	I0717 10:55:14.703412    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:14.703431    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:14.703552    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1011","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0717 10:55:15.200537    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:15.200560    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:15.200569    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:15.200577    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:15.202608    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:15.202621    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:15.202628    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:15.202632    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:15 GMT
	I0717 10:55:15.202636    4493 round_trippers.go:580]     Audit-Id: c12a4e94-3a66-4478-a6dc-bd28300ef803
	I0717 10:55:15.202644    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:15.202648    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:15.202651    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:15.202819    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1011","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0717 10:55:15.203051    4493 node_ready.go:53] node "multinode-875000-m02" has status "Ready":"False"
	I0717 10:55:15.701393    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:15.701415    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:15.701427    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:15.701433    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:15.704339    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:15.704356    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:15.704363    4493 round_trippers.go:580]     Audit-Id: d52299e1-1abf-452d-afc0-2b3a8c6d1231
	I0717 10:55:15.704367    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:15.704372    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:15.704377    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:15.704380    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:15.704383    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:15 GMT
	I0717 10:55:15.704452    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1011","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0717 10:55:16.201611    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:16.201635    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.201646    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.201652    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.204226    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:16.204241    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.204261    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.204269    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:16 GMT
	I0717 10:55:16.204276    4493 round_trippers.go:580]     Audit-Id: 47db4f9b-982d-4d64-9a9b-4dd331b514bb
	I0717 10:55:16.204281    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.204286    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.204291    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.204515    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1011","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0717 10:55:16.700597    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:16.700621    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.700701    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.700710    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.726060    4493 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0717 10:55:16.726077    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.726085    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.726090    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.726095    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:16 GMT
	I0717 10:55:16.726101    4493 round_trippers.go:580]     Audit-Id: 14d733be-9593-4ef6-8fdb-9886cbf78bb5
	I0717 10:55:16.726107    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.726113    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.726316    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1018","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3931 chars]
	I0717 10:55:16.726548    4493 node_ready.go:49] node "multinode-875000-m02" has status "Ready":"True"
	I0717 10:55:16.726559    4493 node_ready.go:38] duration metric: took 15.026225552s for node "multinode-875000-m02" to be "Ready" ...
	I0717 10:55:16.726567    4493 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:55:16.726609    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods
	I0717 10:55:16.726616    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.726624    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.726629    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.729121    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:16.729128    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.729132    4493 round_trippers.go:580]     Audit-Id: 1856a5e9-19d4-4f5b-8032-cb7c4f33d818
	I0717 10:55:16.729137    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.729143    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.729148    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.729153    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.729155    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:16 GMT
	I0717 10:55:16.730130    4493 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1022"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"895","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 86436 chars]
	I0717 10:55:16.732054    4493 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nlwxm" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:16.732094    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nlwxm
	I0717 10:55:16.732098    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.732115    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.732121    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.733261    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:55:16.733268    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.733273    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.733276    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:16 GMT
	I0717 10:55:16.733281    4493 round_trippers.go:580]     Audit-Id: a39df43c-8e87-4377-ad48-297b9d5cd4b5
	I0717 10:55:16.733285    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.733288    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.733291    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.733483    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"895","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6783 chars]
	I0717 10:55:16.733722    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:55:16.733728    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.733734    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.733737    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.734680    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:55:16.734687    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.734692    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.734695    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:16 GMT
	I0717 10:55:16.734699    4493 round_trippers.go:580]     Audit-Id: d0fc177b-2d7e-402f-9556-3e23d30f3b53
	I0717 10:55:16.734702    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.734705    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.734708    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.734816    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:55:16.734986    4493 pod_ready.go:92] pod "coredns-7db6d8ff4d-nlwxm" in "kube-system" namespace has status "Ready":"True"
	I0717 10:55:16.734993    4493 pod_ready.go:81] duration metric: took 2.929581ms for pod "coredns-7db6d8ff4d-nlwxm" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:16.734999    4493 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:16.735032    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-875000
	I0717 10:55:16.735036    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.735042    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.735046    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.736001    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:55:16.736011    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.736016    4493 round_trippers.go:580]     Audit-Id: 93504772-a15e-4f35-b7f6-1885b347e61f
	I0717 10:55:16.736025    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.736029    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.736032    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.736035    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.736037    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:16 GMT
	I0717 10:55:16.736120    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-875000","namespace":"kube-system","uid":"b181608e-80a7-4ef3-9702-315fe76bc83b","resourceVersion":"868","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.15:2379","kubernetes.io/config.hash":"be038e8a848fb24f787b8a2643981714","kubernetes.io/config.mirror":"be038e8a848fb24f787b8a2643981714","kubernetes.io/config.seen":"2024-07-17T17:49:49.643438469Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6357 chars]
	I0717 10:55:16.736335    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:55:16.736341    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.736347    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.736352    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.737254    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:55:16.737261    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.737266    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.737270    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:16 GMT
	I0717 10:55:16.737273    4493 round_trippers.go:580]     Audit-Id: 55641bbe-3856-4eed-8881-fbee0685c13b
	I0717 10:55:16.737276    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.737279    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.737281    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.737424    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:55:16.737593    4493 pod_ready.go:92] pod "etcd-multinode-875000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:55:16.737603    4493 pod_ready.go:81] duration metric: took 2.596542ms for pod "etcd-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:16.737613    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:16.737642    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-875000
	I0717 10:55:16.737647    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.737652    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.737656    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.738744    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:55:16.738749    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.738753    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.738760    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.738765    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.738770    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.738773    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:16 GMT
	I0717 10:55:16.738775    4493 round_trippers.go:580]     Audit-Id: cc74545b-3ce4-4efd-b2c5-34a7e572d2e1
	I0717 10:55:16.738947    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-875000","namespace":"kube-system","uid":"994530a7-11e7-4b05-95ec-c77751a6c24d","resourceVersion":"872","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.15:8443","kubernetes.io/config.hash":"61312bb99a3953bb6d2fa540cea05ce5","kubernetes.io/config.mirror":"61312bb99a3953bb6d2fa540cea05ce5","kubernetes.io/config.seen":"2024-07-17T17:49:49.643441506Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0717 10:55:16.739187    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:55:16.739194    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.739199    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.739204    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.740519    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:55:16.740534    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.740543    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.740560    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:16 GMT
	I0717 10:55:16.740566    4493 round_trippers.go:580]     Audit-Id: 652d62d1-ce80-4f6d-9576-03a17e0b8937
	I0717 10:55:16.740571    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.740574    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.740583    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.740761    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:55:16.740940    4493 pod_ready.go:92] pod "kube-apiserver-multinode-875000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:55:16.740947    4493 pod_ready.go:81] duration metric: took 3.329334ms for pod "kube-apiserver-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:16.740954    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:16.740988    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-875000
	I0717 10:55:16.740993    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.740998    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.741002    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.741960    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:55:16.741968    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.741972    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.741982    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.741987    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.741991    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.741995    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:16 GMT
	I0717 10:55:16.741998    4493 round_trippers.go:580]     Audit-Id: 7b3c7d51-ad80-40fb-9acb-44ca7ff96048
	I0717 10:55:16.742169    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-875000","namespace":"kube-system","uid":"10a5876c-ddf6-4f37-82ca-96ea7ebde028","resourceVersion":"875","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2fd022622c52536f8b0b923ecacc7ea2","kubernetes.io/config.mirror":"2fd022622c52536f8b0b923ecacc7ea2","kubernetes.io/config.seen":"2024-07-17T17:49:49.643442180Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7464 chars]
	I0717 10:55:16.742397    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:55:16.742404    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.742409    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.742413    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.743376    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:55:16.743387    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.743393    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.743398    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.743401    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:16 GMT
	I0717 10:55:16.743406    4493 round_trippers.go:580]     Audit-Id: 6179a71e-5ca2-4260-a1bd-55b324d233c6
	I0717 10:55:16.743409    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.743412    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.743494    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:55:16.743716    4493 pod_ready.go:92] pod "kube-controller-manager-multinode-875000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:55:16.743726    4493 pod_ready.go:81] duration metric: took 2.766569ms for pod "kube-controller-manager-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:16.743740    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dnn4j" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:16.901292    4493 request.go:629] Waited for 157.497354ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dnn4j
	I0717 10:55:16.901464    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dnn4j
	I0717 10:55:16.901474    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.901493    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.901499    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.903974    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:16.903986    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.903995    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:17 GMT
	I0717 10:55:16.904003    4493 round_trippers.go:580]     Audit-Id: 4803e979-7eae-4122-8947-58ccbc9c8733
	I0717 10:55:16.904009    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.904015    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.904019    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.904027    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.904324    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dnn4j","generateName":"kube-proxy-","namespace":"kube-system","uid":"fd7faf4d-f212-4c89-9ac5-8e408c295411","resourceVersion":"930","creationTimestamp":"2024-07-17T17:51:33Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8a69f9ff-027d-455b-b65a-6c9aef9936e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:51:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a69f9ff-027d-455b-b65a-6c9aef9936e7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6056 chars]
	I0717 10:55:17.102759    4493 request.go:629] Waited for 198.049639ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m03
	I0717 10:55:17.102859    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m03
	I0717 10:55:17.102870    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:17.102882    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:17.102889    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:17.105906    4493 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:55:17.105923    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:17.105931    4493 round_trippers.go:580]     Audit-Id: 76d0cd12-471c-4e18-86a3-adac6efe39d4
	I0717 10:55:17.105935    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:17.105938    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:17.105941    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:17.105944    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:17.105950    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:17 GMT
	I0717 10:55:17.106403    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m03","uid":"4dcfd269-94b0-4652-bd6d-b7d938fc2b6d","resourceVersion":"941","creationTimestamp":"2024-07-17T17:52:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_52_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:52:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 4397 chars]
	I0717 10:55:17.106648    4493 pod_ready.go:97] node "multinode-875000-m03" hosting pod "kube-proxy-dnn4j" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000-m03" has status "Ready":"Unknown"
	I0717 10:55:17.106662    4493 pod_ready.go:81] duration metric: took 362.907614ms for pod "kube-proxy-dnn4j" in "kube-system" namespace to be "Ready" ...
	E0717 10:55:17.106694    4493 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-875000-m03" hosting pod "kube-proxy-dnn4j" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000-m03" has status "Ready":"Unknown"
	I0717 10:55:17.106709    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tp2zz" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:17.301730    4493 request.go:629] Waited for 194.949264ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp2zz
	I0717 10:55:17.301801    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp2zz
	I0717 10:55:17.301817    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:17.301828    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:17.301835    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:17.304783    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:17.304798    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:17.304805    4493 round_trippers.go:580]     Audit-Id: 943cc64b-404f-4f7f-937f-11ed72b7e6ec
	I0717 10:55:17.304809    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:17.304821    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:17.304827    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:17.304831    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:17.304834    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:17 GMT
	I0717 10:55:17.304949    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tp2zz","generateName":"kube-proxy-","namespace":"kube-system","uid":"9fda8ef7-b324-4cbb-a8d9-98f93132b2e7","resourceVersion":"997","creationTimestamp":"2024-07-17T17:50:42Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8a69f9ff-027d-455b-b65a-6c9aef9936e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a69f9ff-027d-455b-b65a-6c9aef9936e7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0717 10:55:17.502523    4493 request.go:629] Waited for 197.149589ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:17.502710    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:17.502722    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:17.502732    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:17.502741    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:17.505415    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:17.505430    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:17.505438    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:17 GMT
	I0717 10:55:17.505443    4493 round_trippers.go:580]     Audit-Id: 901a17f3-78d9-41a7-ac16-c8c49a561782
	I0717 10:55:17.505448    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:17.505452    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:17.505464    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:17.505471    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:17.505728    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1018","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3931 chars]
	I0717 10:55:17.505964    4493 pod_ready.go:92] pod "kube-proxy-tp2zz" in "kube-system" namespace has status "Ready":"True"
	I0717 10:55:17.505975    4493 pod_ready.go:81] duration metric: took 399.244269ms for pod "kube-proxy-tp2zz" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:17.505983    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zs8f8" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:17.700691    4493 request.go:629] Waited for 194.656124ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zs8f8
	I0717 10:55:17.700818    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zs8f8
	I0717 10:55:17.700826    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:17.700837    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:17.700843    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:17.703408    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:17.703421    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:17.703428    4493 round_trippers.go:580]     Audit-Id: 1dc7e35c-f750-44d3-8764-34f022e1e8ef
	I0717 10:55:17.703433    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:17.703449    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:17.703457    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:17.703461    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:17.703468    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:17 GMT
	I0717 10:55:17.703710    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zs8f8","generateName":"kube-proxy-","namespace":"kube-system","uid":"9e2bce56-d9e0-42a1-a265-4aab3577b031","resourceVersion":"774","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8a69f9ff-027d-455b-b65a-6c9aef9936e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a69f9ff-027d-455b-b65a-6c9aef9936e7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0717 10:55:17.902542    4493 request.go:629] Waited for 198.454297ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:55:17.902695    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:55:17.902704    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:17.902716    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:17.902726    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:17.906048    4493 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:55:17.906064    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:17.906071    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:17.906074    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:18 GMT
	I0717 10:55:17.906079    4493 round_trippers.go:580]     Audit-Id: f954c10c-4ffe-4bd7-b2c4-32df06ff1c24
	I0717 10:55:17.906082    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:17.906087    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:17.906092    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:17.906213    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:55:17.906477    4493 pod_ready.go:92] pod "kube-proxy-zs8f8" in "kube-system" namespace has status "Ready":"True"
	I0717 10:55:17.906489    4493 pod_ready.go:81] duration metric: took 400.48952ms for pod "kube-proxy-zs8f8" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:17.906497    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:18.100806    4493 request.go:629] Waited for 194.255923ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-875000
	I0717 10:55:18.100944    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-875000
	I0717 10:55:18.100963    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:18.100976    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:18.100983    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:18.103096    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:18.103109    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:18.103116    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:18.103123    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:18.103127    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:18.103131    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:18 GMT
	I0717 10:55:18.103136    4493 round_trippers.go:580]     Audit-Id: 8a4a63de-1847-4f14-a54c-b57984d5fa46
	I0717 10:55:18.103139    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:18.103454    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-875000","namespace":"kube-system","uid":"b2f1c23d-635b-490e-a964-c28e1566ead0","resourceVersion":"877","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e0081164f6599d019e7f075e4c8b7277","kubernetes.io/config.mirror":"e0081164f6599d019e7f075e4c8b7277","kubernetes.io/config.seen":"2024-07-17T17:49:49.643442746Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5194 chars]
	I0717 10:55:18.301075    4493 request.go:629] Waited for 197.262235ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:55:18.301191    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:55:18.301201    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:18.301212    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:18.301218    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:18.304146    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:18.304161    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:18.304171    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:18.304180    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:18.304187    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:18 GMT
	I0717 10:55:18.304192    4493 round_trippers.go:580]     Audit-Id: e40f13c8-894a-468f-8a32-af0fe283917f
	I0717 10:55:18.304197    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:18.304202    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:18.304481    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:55:18.304750    4493 pod_ready.go:92] pod "kube-scheduler-multinode-875000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:55:18.304761    4493 pod_ready.go:81] duration metric: took 398.246673ms for pod "kube-scheduler-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:18.304770    4493 pod_ready.go:38] duration metric: took 1.578152715s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:55:18.304783    4493 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 10:55:18.304838    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 10:55:18.316091    4493 system_svc.go:56] duration metric: took 11.304975ms WaitForService to wait for kubelet
	I0717 10:55:18.316107    4493 kubeadm.go:582] duration metric: took 16.805246753s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:55:18.316118    4493 node_conditions.go:102] verifying NodePressure condition ...
	I0717 10:55:18.502715    4493 request.go:629] Waited for 186.547114ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes
	I0717 10:55:18.502771    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes
	I0717 10:55:18.502830    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:18.502845    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:18.502852    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:18.505364    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:18.505377    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:18.505385    4493 round_trippers.go:580]     Audit-Id: b7f8dc13-51c8-4398-8efb-6f0c8d5fe1b4
	I0717 10:55:18.505389    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:18.505396    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:18.505399    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:18.505404    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:18.505410    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:18 GMT
	I0717 10:55:18.505638    4493 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1026"},"items":[{"metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFie
lds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 15419 chars]
	I0717 10:55:18.506053    4493 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:55:18.506062    4493 node_conditions.go:123] node cpu capacity is 2
	I0717 10:55:18.506068    4493 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:55:18.506071    4493 node_conditions.go:123] node cpu capacity is 2
	I0717 10:55:18.506074    4493 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:55:18.506076    4493 node_conditions.go:123] node cpu capacity is 2
	I0717 10:55:18.506079    4493 node_conditions.go:105] duration metric: took 189.952755ms to run NodePressure ...
	I0717 10:55:18.506090    4493 start.go:241] waiting for startup goroutines ...
	I0717 10:55:18.506107    4493 start.go:255] writing updated cluster config ...
	I0717 10:55:18.527533    4493 out.go:177] 
	I0717 10:55:18.548977    4493 config.go:182] Loaded profile config "multinode-875000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:55:18.549073    4493 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/config.json ...
	I0717 10:55:18.570746    4493 out.go:177] * Starting "multinode-875000-m03" worker node in "multinode-875000" cluster
	I0717 10:55:18.628649    4493 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:55:18.628682    4493 cache.go:56] Caching tarball of preloaded images
	I0717 10:55:18.628870    4493 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:55:18.628889    4493 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:55:18.629014    4493 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/config.json ...
	I0717 10:55:18.629824    4493 start.go:360] acquireMachinesLock for multinode-875000-m03: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:55:18.629924    4493 start.go:364] duration metric: took 76.587µs to acquireMachinesLock for "multinode-875000-m03"
	I0717 10:55:18.629950    4493 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:55:18.629958    4493 fix.go:54] fixHost starting: m03
	I0717 10:55:18.630382    4493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:55:18.630437    4493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:55:18.639596    4493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53234
	I0717 10:55:18.639967    4493 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:55:18.640309    4493 main.go:141] libmachine: Using API Version  1
	I0717 10:55:18.640320    4493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:55:18.640562    4493 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:55:18.640688    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .DriverName
	I0717 10:55:18.640775    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetState
	I0717 10:55:18.640854    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:55:18.640960    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | hyperkit pid from json: 4459
	I0717 10:55:18.641870    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | hyperkit pid 4459 missing from process table
	I0717 10:55:18.641901    4493 fix.go:112] recreateIfNeeded on multinode-875000-m03: state=Stopped err=<nil>
	I0717 10:55:18.641909    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .DriverName
	W0717 10:55:18.641994    4493 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:55:18.662679    4493 out.go:177] * Restarting existing hyperkit VM for "multinode-875000-m03" ...
	I0717 10:55:18.704694    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .Start
	I0717 10:55:18.704928    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:55:18.705049    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/hyperkit.pid
	I0717 10:55:18.705076    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | Using UUID 9f16c9eb-59a4-416c-922e-880fb325e397
	I0717 10:55:18.731073    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | Generated MAC a2:dd:4c:c6:bd:14
	I0717 10:55:18.731094    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-875000
	I0717 10:55:18.731233    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9f16c9eb-59a4-416c-922e-880fb325e397", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b860)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0717 10:55:18.731260    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9f16c9eb-59a4-416c-922e-880fb325e397", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b860)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0717 10:55:18.731329    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9f16c9eb-59a4-416c-922e-880fb325e397", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/multinode-875000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/bzimage,/Users/j
enkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-875000"}
	I0717 10:55:18.731371    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9f16c9eb-59a4-416c-922e-880fb325e397 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/multinode-875000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/mult
inode-875000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-875000"
	I0717 10:55:18.731388    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:55:18.732842    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 DEBUG: hyperkit: Pid is 4575
	I0717 10:55:18.733381    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | Attempt 0
	I0717 10:55:18.733397    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:55:18.733483    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | hyperkit pid from json: 4575
	I0717 10:55:18.734740    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | Searching for a2:dd:4c:c6:bd:14 in /var/db/dhcpd_leases ...
	I0717 10:55:18.734809    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | Found 16 entries in /var/db/dhcpd_leases!
	I0717 10:55:18.734844    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:de:84:ef:f1:8f:c7 ID:1,de:84:ef:f1:8f:c7 Lease:0x669956d0}
	I0717 10:55:18.734866    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:c1:c6:6d:b5:4e ID:1,92:c1:c6:6d:b5:4e Lease:0x6699568d}
	I0717 10:55:18.734880    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:a2:dd:4c:c6:bd:14 ID:1,a2:dd:4c:c6:bd:14 Lease:0x669804f2}
	I0717 10:55:18.734892    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | Found match: a2:dd:4c:c6:bd:14
	I0717 10:55:18.734898    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetConfigRaw
	I0717 10:55:18.734900    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | IP: 192.169.0.17
	I0717 10:55:18.735577    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetIP
	I0717 10:55:18.735784    4493 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/config.json ...
	I0717 10:55:18.736385    4493 machine.go:94] provisionDockerMachine start ...
	I0717 10:55:18.736400    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .DriverName
	I0717 10:55:18.736541    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	I0717 10:55:18.736645    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHPort
	I0717 10:55:18.736777    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:18.736923    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:18.737028    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHUsername
	I0717 10:55:18.737169    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:55:18.737328    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0717 10:55:18.737335    4493 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:55:18.740480    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:55:18.748625    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:55:18.749581    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:55:18.749607    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:55:18.749642    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:55:18.749658    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:55:19.130189    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:55:19.130205    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:55:19.244919    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:55:19.244940    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:55:19.244950    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:55:19.244957    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:55:19.245817    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:55:19.245830    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:55:24.518017    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:24 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:55:24.518034    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:24 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:55:24.518044    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:24 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:55:24.541492    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:24 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:55:29.791395    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:55:29.791411    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetMachineName
	I0717 10:55:29.791539    4493 buildroot.go:166] provisioning hostname "multinode-875000-m03"
	I0717 10:55:29.791552    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetMachineName
	I0717 10:55:29.791647    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	I0717 10:55:29.791738    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHPort
	I0717 10:55:29.791848    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:29.791945    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:29.792076    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHUsername
	I0717 10:55:29.792213    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:55:29.792363    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0717 10:55:29.792371    4493 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-875000-m03 && echo "multinode-875000-m03" | sudo tee /etc/hostname
	I0717 10:55:29.851886    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-875000-m03
	
	I0717 10:55:29.851902    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	I0717 10:55:29.852032    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHPort
	I0717 10:55:29.852125    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:29.852225    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:29.852326    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHUsername
	I0717 10:55:29.852459    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:55:29.852609    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0717 10:55:29.852623    4493 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-875000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-875000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-875000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:55:29.906344    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:55:29.906360    4493 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:55:29.906369    4493 buildroot.go:174] setting up certificates
	I0717 10:55:29.906375    4493 provision.go:84] configureAuth start
	I0717 10:55:29.906381    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetMachineName
	I0717 10:55:29.906511    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetIP
	I0717 10:55:29.906606    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	I0717 10:55:29.906696    4493 provision.go:143] copyHostCerts
	I0717 10:55:29.906725    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:55:29.906772    4493 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:55:29.906778    4493 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:55:29.906974    4493 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:55:29.907207    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:55:29.907238    4493 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:55:29.907242    4493 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:55:29.907311    4493 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:55:29.907458    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:55:29.907486    4493 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:55:29.907491    4493 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:55:29.907583    4493 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:55:29.907755    4493 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.multinode-875000-m03 san=[127.0.0.1 192.169.0.17 localhost minikube multinode-875000-m03]
	I0717 10:55:30.133100    4493 provision.go:177] copyRemoteCerts
	I0717 10:55:30.133152    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:55:30.133168    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	I0717 10:55:30.133312    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHPort
	I0717 10:55:30.133411    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:30.133487    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHUsername
	I0717 10:55:30.133564    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/id_rsa Username:docker}
	I0717 10:55:30.172522    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:55:30.172601    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:55:30.199016    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:55:30.199089    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 10:55:30.218622    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:55:30.218695    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:55:30.238961    4493 provision.go:87] duration metric: took 332.569934ms to configureAuth
	I0717 10:55:30.238975    4493 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:55:30.239137    4493 config.go:182] Loaded profile config "multinode-875000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:55:30.239151    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .DriverName
	I0717 10:55:30.239286    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	I0717 10:55:30.239379    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHPort
	I0717 10:55:30.239464    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:30.239546    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:30.239624    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHUsername
	I0717 10:55:30.239731    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:55:30.239854    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0717 10:55:30.239861    4493 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:55:30.288639    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:55:30.288652    4493 buildroot.go:70] root file system type: tmpfs
	I0717 10:55:30.288720    4493 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:55:30.288732    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	I0717 10:55:30.288866    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHPort
	I0717 10:55:30.288964    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:30.289045    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:30.289128    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHUsername
	I0717 10:55:30.289245    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:55:30.289386    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0717 10:55:30.289435    4493 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.15"
	Environment="NO_PROXY=192.169.0.15,192.169.0.16"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:55:30.348406    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.15
	Environment=NO_PROXY=192.169.0.15,192.169.0.16
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:55:30.348425    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	I0717 10:55:30.348572    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHPort
	I0717 10:55:30.348661    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:30.348756    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:30.348839    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHUsername
	I0717 10:55:30.348982    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:55:30.349145    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0717 10:55:30.349158    4493 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:55:31.884992    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:55:31.885007    4493 machine.go:97] duration metric: took 13.148260509s to provisionDockerMachine
	I0717 10:55:31.885019    4493 start.go:293] postStartSetup for "multinode-875000-m03" (driver="hyperkit")
	I0717 10:55:31.885027    4493 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:55:31.885038    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .DriverName
	I0717 10:55:31.885202    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:55:31.885213    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	I0717 10:55:31.885301    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHPort
	I0717 10:55:31.885388    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:31.885478    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHUsername
	I0717 10:55:31.885566    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/id_rsa Username:docker}
	I0717 10:55:31.916267    4493 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:55:31.919172    4493 command_runner.go:130] > NAME=Buildroot
	I0717 10:55:31.919184    4493 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0717 10:55:31.919188    4493 command_runner.go:130] > ID=buildroot
	I0717 10:55:31.919192    4493 command_runner.go:130] > VERSION_ID=2023.02.9
	I0717 10:55:31.919213    4493 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0717 10:55:31.919404    4493 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:55:31.919414    4493 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:55:31.919495    4493 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:55:31.919638    4493 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:55:31.919645    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:55:31.919804    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:55:31.927953    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:55:31.947599    4493 start.go:296] duration metric: took 62.569789ms for postStartSetup
	I0717 10:55:31.947621    4493 fix.go:56] duration metric: took 13.317307309s for fixHost
	I0717 10:55:31.947655    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	I0717 10:55:31.947813    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHPort
	I0717 10:55:31.947906    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:31.947995    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:31.948094    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHUsername
	I0717 10:55:31.948200    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:55:31.948331    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0717 10:55:31.948338    4493 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 10:55:31.998271    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721238931.935250312
	
	I0717 10:55:31.998281    4493 fix.go:216] guest clock: 1721238931.935250312
	I0717 10:55:31.998286    4493 fix.go:229] Guest: 2024-07-17 10:55:31.935250312 -0700 PDT Remote: 2024-07-17 10:55:31.947629 -0700 PDT m=+143.539947732 (delta=-12.378688ms)
	I0717 10:55:31.998305    4493 fix.go:200] guest clock delta is within tolerance: -12.378688ms
	I0717 10:55:31.998310    4493 start.go:83] releasing machines lock for "multinode-875000-m03", held for 13.368017038s
	I0717 10:55:31.998327    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .DriverName
	I0717 10:55:31.998458    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetIP
	I0717 10:55:32.037875    4493 out.go:177] * Found network options:
	I0717 10:55:32.059947    4493 out.go:177]   - NO_PROXY=192.169.0.15,192.169.0.16
	W0717 10:55:32.081831    4493 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:55:32.081867    4493 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:55:32.081887    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .DriverName
	I0717 10:55:32.082744    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .DriverName
	I0717 10:55:32.082998    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .DriverName
	I0717 10:55:32.083119    4493 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:55:32.083157    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	W0717 10:55:32.083270    4493 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:55:32.083295    4493 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:55:32.083350    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHPort
	I0717 10:55:32.083379    4493 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 10:55:32.083397    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	I0717 10:55:32.083535    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:32.083563    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHPort
	I0717 10:55:32.083740    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHUsername
	I0717 10:55:32.083784    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:32.083929    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/id_rsa Username:docker}
	I0717 10:55:32.083987    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHUsername
	I0717 10:55:32.084131    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/id_rsa Username:docker}
	I0717 10:55:32.111935    4493 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 10:55:32.111960    4493 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:55:32.112015    4493 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:55:32.160826    4493 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 10:55:32.161606    4493 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0717 10:55:32.161649    4493 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:55:32.161660    4493 start.go:495] detecting cgroup driver to use...
	I0717 10:55:32.161730    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:55:32.185179    4493 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0717 10:55:32.185265    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:55:32.194356    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:55:32.203631    4493 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:55:32.203692    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:55:32.216813    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:55:32.229375    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:55:32.240069    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:55:32.248338    4493 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:55:32.256655    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:55:32.264843    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:55:32.273009    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:55:32.281296    4493 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:55:32.288733    4493 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 10:55:32.288815    4493 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:55:32.296275    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:55:32.385477    4493 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:55:32.403708    4493 start.go:495] detecting cgroup driver to use...
	I0717 10:55:32.403776    4493 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:55:32.419867    4493 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0717 10:55:32.420295    4493 command_runner.go:130] > [Unit]
	I0717 10:55:32.420304    4493 command_runner.go:130] > Description=Docker Application Container Engine
	I0717 10:55:32.420309    4493 command_runner.go:130] > Documentation=https://docs.docker.com
	I0717 10:55:32.420314    4493 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0717 10:55:32.420319    4493 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0717 10:55:32.420329    4493 command_runner.go:130] > StartLimitBurst=3
	I0717 10:55:32.420333    4493 command_runner.go:130] > StartLimitIntervalSec=60
	I0717 10:55:32.420337    4493 command_runner.go:130] > [Service]
	I0717 10:55:32.420340    4493 command_runner.go:130] > Type=notify
	I0717 10:55:32.420344    4493 command_runner.go:130] > Restart=on-failure
	I0717 10:55:32.420347    4493 command_runner.go:130] > Environment=NO_PROXY=192.169.0.15
	I0717 10:55:32.420353    4493 command_runner.go:130] > Environment=NO_PROXY=192.169.0.15,192.169.0.16
	I0717 10:55:32.420360    4493 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0717 10:55:32.420368    4493 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0717 10:55:32.420374    4493 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0717 10:55:32.420380    4493 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0717 10:55:32.420386    4493 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0717 10:55:32.420392    4493 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0717 10:55:32.420400    4493 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0717 10:55:32.420405    4493 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0717 10:55:32.420411    4493 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0717 10:55:32.420414    4493 command_runner.go:130] > ExecStart=
	I0717 10:55:32.420429    4493 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0717 10:55:32.420435    4493 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0717 10:55:32.420441    4493 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0717 10:55:32.420447    4493 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0717 10:55:32.420451    4493 command_runner.go:130] > LimitNOFILE=infinity
	I0717 10:55:32.420456    4493 command_runner.go:130] > LimitNPROC=infinity
	I0717 10:55:32.420460    4493 command_runner.go:130] > LimitCORE=infinity
	I0717 10:55:32.420465    4493 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0717 10:55:32.420469    4493 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0717 10:55:32.420472    4493 command_runner.go:130] > TasksMax=infinity
	I0717 10:55:32.420475    4493 command_runner.go:130] > TimeoutStartSec=0
	I0717 10:55:32.420482    4493 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0717 10:55:32.420485    4493 command_runner.go:130] > Delegate=yes
	I0717 10:55:32.420494    4493 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0717 10:55:32.420499    4493 command_runner.go:130] > KillMode=process
	I0717 10:55:32.420502    4493 command_runner.go:130] > [Install]
	I0717 10:55:32.420505    4493 command_runner.go:130] > WantedBy=multi-user.target
	I0717 10:55:32.420579    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:55:32.431667    4493 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:55:32.449419    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:55:32.460531    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:55:32.470937    4493 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:55:32.494067    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:55:32.504843    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:55:32.519369    4493 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0717 10:55:32.519610    4493 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:55:32.522315    4493 command_runner.go:130] > /usr/bin/cri-dockerd
	I0717 10:55:32.522496    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:55:32.529531    4493 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:55:32.542789    4493 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:55:32.634151    4493 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:55:32.745594    4493 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:55:32.745625    4493 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
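	The 130-byte /etc/docker/daemon.json itself is not echoed in this log. Purely as an illustration of the "cgroupfs" choice noted above (hypothetical content, not the actual file), such a configuration typically looks like:
	
	    # Hypothetical example -- the real file contents are not captured in this log.
	    $ cat /etc/docker/daemon.json
	    {
	      "exec-opts": ["native.cgroupdriver=cgroupfs"],
	      "storage-driver": "overlay2",
	      "log-driver": "json-file",
	      "log-opts": { "max-size": "100m" }
	    }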
	I0717 10:55:32.759807    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:55:32.847881    4493 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:56:33.759281    4493 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0717 10:56:33.759296    4493 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0717 10:56:33.759308    4493 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.909780876s)
	I0717 10:56:33.759377    4493 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0717 10:56:33.768846    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 systemd[1]: Starting Docker Application Container Engine...
	I0717 10:56:33.768860    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:30.542486449Z" level=info msg="Starting up"
	I0717 10:56:33.768873    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:30.543020597Z" level=info msg="containerd not running, starting managed containerd"
	I0717 10:56:33.768888    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:30.543629257Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=497
	I0717 10:56:33.768898    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.563879235Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	I0717 10:56:33.768908    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578639071Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0717 10:56:33.768918    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578688475Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0717 10:56:33.768927    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578734687Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0717 10:56:33.768937    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578744907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0717 10:56:33.768948    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578880671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0717 10:56:33.768965    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578915546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0717 10:56:33.768985    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579089229Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0717 10:56:33.768995    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579124372Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0717 10:56:33.769006    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579137516Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0717 10:56:33.769015    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579155509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0717 10:56:33.769025    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579257039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0717 10:56:33.769034    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579442615Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0717 10:56:33.769048    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581063677Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0717 10:56:33.769057    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581103793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0717 10:56:33.769143    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581217146Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0717 10:56:33.769156    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581251600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0717 10:56:33.769166    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581368444Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0717 10:56:33.769174    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581415284Z" level=info msg="metadata content store policy set" policy=shared
	I0717 10:56:33.769184    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582705517Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0717 10:56:33.769193    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582728255Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0717 10:56:33.769201    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582738757Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0717 10:56:33.769210    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582749147Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0717 10:56:33.769222    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582757689Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0717 10:56:33.769231    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582813384Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0717 10:56:33.769239    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583020255Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0717 10:56:33.769248    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583090475Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0717 10:56:33.769257    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583101536Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0717 10:56:33.769266    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583109897Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0717 10:56:33.769276    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583118535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0717 10:56:33.769286    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583127458Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0717 10:56:33.769295    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583135620Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0717 10:56:33.769304    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583144927Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0717 10:56:33.769315    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583153844Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0717 10:56:33.769326    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583165258Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0717 10:56:33.769483    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583174183Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0717 10:56:33.769499    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583181925Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0717 10:56:33.769508    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583194324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769517    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583203455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769526    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583212086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769535    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583221149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769544    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583229489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769556    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583238022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769565    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583251699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769574    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583263339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769583    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583271970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769592    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583281243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769602    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583288865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769611    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583296689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769620    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583305583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769629    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583318438Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0717 10:56:33.769637    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583332773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769646    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583341417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769655    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583349074Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0717 10:56:33.769665    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583375670Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0717 10:56:33.769676    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583386642Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0717 10:56:33.769686    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583394389Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0717 10:56:33.769810    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583402289Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0717 10:56:33.769821    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583409057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769836    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583418556Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0717 10:56:33.769845    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583425769Z" level=info msg="NRI interface is disabled by configuration."
	I0717 10:56:33.769854    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583559218Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0717 10:56:33.769861    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583617368Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0717 10:56:33.769870    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583645404Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0717 10:56:33.769877    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583678442Z" level=info msg="containerd successfully booted in 0.021002s"
	I0717 10:56:33.769885    4493 command_runner.go:130] > Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.566115906Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0717 10:56:33.769893    4493 command_runner.go:130] > Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.581160310Z" level=info msg="Loading containers: start."
	I0717 10:56:33.769912    4493 command_runner.go:130] > Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.678906471Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0717 10:56:33.769923    4493 command_runner.go:130] > Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.740696250Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0717 10:56:33.769931    4493 command_runner.go:130] > Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.786371404Z" level=info msg="Loading containers: done."
	I0717 10:56:33.769941    4493 command_runner.go:130] > Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.795512822Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0717 10:56:33.769948    4493 command_runner.go:130] > Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.795668358Z" level=info msg="Daemon has completed initialization"
	I0717 10:56:33.769956    4493 command_runner.go:130] > Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.818093328Z" level=info msg="API listen on /var/run/docker.sock"
	I0717 10:56:33.769963    4493 command_runner.go:130] > Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.818285430Z" level=info msg="API listen on [::]:2376"
	I0717 10:56:33.769969    4493 command_runner.go:130] > Jul 17 17:55:31 multinode-875000-m03 systemd[1]: Started Docker Application Container Engine.
	I0717 10:56:33.769976    4493 command_runner.go:130] > Jul 17 17:55:32 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:32.813799949Z" level=info msg="Processing signal 'terminated'"
	I0717 10:56:33.769983    4493 command_runner.go:130] > Jul 17 17:55:32 multinode-875000-m03 systemd[1]: Stopping Docker Application Container Engine...
	I0717 10:56:33.769992    4493 command_runner.go:130] > Jul 17 17:55:32 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:32.815030335Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0717 10:56:33.770005    4493 command_runner.go:130] > Jul 17 17:55:32 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:32.815161263Z" level=info msg="Daemon shutdown complete"
	I0717 10:56:33.770014    4493 command_runner.go:130] > Jul 17 17:55:32 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:32.815281374Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0717 10:56:33.770046    4493 command_runner.go:130] > Jul 17 17:55:32 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:32.815427332Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0717 10:56:33.770053    4493 command_runner.go:130] > Jul 17 17:55:33 multinode-875000-m03 systemd[1]: docker.service: Deactivated successfully.
	I0717 10:56:33.770059    4493 command_runner.go:130] > Jul 17 17:55:33 multinode-875000-m03 systemd[1]: Stopped Docker Application Container Engine.
	I0717 10:56:33.770066    4493 command_runner.go:130] > Jul 17 17:55:33 multinode-875000-m03 systemd[1]: Starting Docker Application Container Engine...
	I0717 10:56:33.770073    4493 command_runner.go:130] > Jul 17 17:55:33 multinode-875000-m03 dockerd[853]: time="2024-07-17T17:55:33.852812593Z" level=info msg="Starting up"
	I0717 10:56:33.770084    4493 command_runner.go:130] > Jul 17 17:56:33 multinode-875000-m03 dockerd[853]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0717 10:56:33.770091    4493 command_runner.go:130] > Jul 17 17:56:33 multinode-875000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0717 10:56:33.770098    4493 command_runner.go:130] > Jul 17 17:56:33 multinode-875000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0717 10:56:33.770105    4493 command_runner.go:130] > Jul 17 17:56:33 multinode-875000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	I0717 10:56:33.794710    4493 out.go:177] 
	W0717 10:56:33.816416    4493 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 17:55:30 multinode-875000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 17:55:30 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:30.542486449Z" level=info msg="Starting up"
	Jul 17 17:55:30 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:30.543020597Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 17:55:30 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:30.543629257Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=497
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.563879235Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578639071Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578688475Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578734687Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578744907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578880671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578915546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579089229Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579124372Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579137516Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579155509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579257039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579442615Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581063677Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581103793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581217146Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581251600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581368444Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581415284Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582705517Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582728255Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582738757Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582749147Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582757689Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582813384Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583020255Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583090475Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583101536Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583109897Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583118535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583127458Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583135620Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583144927Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583153844Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583165258Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583174183Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583181925Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583194324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583203455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583212086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583221149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583229489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583238022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583251699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583263339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583271970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583281243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583288865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583296689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583305583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583318438Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583332773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583341417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583349074Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583375670Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583386642Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583394389Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583402289Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583409057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583418556Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583425769Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583559218Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583617368Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583645404Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583678442Z" level=info msg="containerd successfully booted in 0.021002s"
	Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.566115906Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.581160310Z" level=info msg="Loading containers: start."
	Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.678906471Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.740696250Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.786371404Z" level=info msg="Loading containers: done."
	Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.795512822Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.795668358Z" level=info msg="Daemon has completed initialization"
	Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.818093328Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.818285430Z" level=info msg="API listen on [::]:2376"
	Jul 17 17:55:31 multinode-875000-m03 systemd[1]: Started Docker Application Container Engine.
	Jul 17 17:55:32 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:32.813799949Z" level=info msg="Processing signal 'terminated'"
	Jul 17 17:55:32 multinode-875000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 17:55:32 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:32.815030335Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 17:55:32 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:32.815161263Z" level=info msg="Daemon shutdown complete"
	Jul 17 17:55:32 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:32.815281374Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 17:55:32 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:32.815427332Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 17:55:33 multinode-875000-m03 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 17:55:33 multinode-875000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 17:55:33 multinode-875000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 17:55:33 multinode-875000-m03 dockerd[853]: time="2024-07-17T17:55:33.852812593Z" level=info msg="Starting up"
	Jul 17 17:56:33 multinode-875000-m03 dockerd[853]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 17:56:33 multinode-875000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 17:56:33 multinode-875000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 17:56:33 multinode-875000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
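	One plausible reading of the failure above (an interpretation, not stated in the log): the restarted dockerd (pid 853) waits 60 seconds for /run/containerd/containerd.sock and then gives up, while containerd itself was stopped earlier in this run with `sudo systemctl stop -f containerd`. Whether a stale socket file or an explicit containerd address in the daemon configuration is responsible is not visible here. A hedged, read-only follow-up sketch for the guest:
	
	    # Read-only checks; none of these alter the node.
	    sudo systemctl is-active containerd                   # expected "inactive" after the earlier stop
	    ls -l /run/containerd/containerd.sock 2>/dev/null     # does a (possibly stale) socket still exist?
	    sudo journalctl -u docker --no-pager | tail -n 5      # re-confirm the dial timeout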
	W0717 10:56:33.816540    4493 out.go:239] * 
	* 
	W0717 10:56:33.817502    4493 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:56:33.879510    4493 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-875000" : exit status 90
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-875000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-875000 -n multinode-875000
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-875000 logs -n 25: (2.895373573s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                                            Args                                                             |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-875000 ssh -n                                                                                                     | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:52 PDT | 17 Jul 24 10:52 PDT |
	|         | multinode-875000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-875000 cp multinode-875000-m02:/home/docker/cp-test.txt                                                           | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:52 PDT | 17 Jul 24 10:52 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1394758527/001/cp-test_multinode-875000-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-875000 ssh -n                                                                                                     | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:52 PDT | 17 Jul 24 10:52 PDT |
	|         | multinode-875000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-875000 cp multinode-875000-m02:/home/docker/cp-test.txt                                                           | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:52 PDT | 17 Jul 24 10:52 PDT |
	|         | multinode-875000:/home/docker/cp-test_multinode-875000-m02_multinode-875000.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-875000 ssh -n                                                                                                     | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:52 PDT | 17 Jul 24 10:52 PDT |
	|         | multinode-875000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-875000 ssh -n multinode-875000 sudo cat                                                                           | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:52 PDT | 17 Jul 24 10:52 PDT |
	|         | /home/docker/cp-test_multinode-875000-m02_multinode-875000.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-875000 cp multinode-875000-m02:/home/docker/cp-test.txt                                                           | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:52 PDT | 17 Jul 24 10:52 PDT |
	|         | multinode-875000-m03:/home/docker/cp-test_multinode-875000-m02_multinode-875000-m03.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-875000 ssh -n                                                                                                     | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:52 PDT | 17 Jul 24 10:52 PDT |
	|         | multinode-875000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-875000 ssh -n multinode-875000-m03 sudo cat                                                                       | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:52 PDT | 17 Jul 24 10:52 PDT |
	|         | /home/docker/cp-test_multinode-875000-m02_multinode-875000-m03.txt                                                          |                  |         |         |                     |                     |
	| cp      | multinode-875000 cp testdata/cp-test.txt                                                                                    | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:52 PDT | 17 Jul 24 10:52 PDT |
	|         | multinode-875000-m03:/home/docker/cp-test.txt                                                                               |                  |         |         |                     |                     |
	| ssh     | multinode-875000 ssh -n                                                                                                     | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:52 PDT | 17 Jul 24 10:52 PDT |
	|         | multinode-875000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-875000 cp multinode-875000-m03:/home/docker/cp-test.txt                                                           | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:52 PDT | 17 Jul 24 10:52 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1394758527/001/cp-test_multinode-875000-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-875000 ssh -n                                                                                                     | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:52 PDT | 17 Jul 24 10:52 PDT |
	|         | multinode-875000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-875000 cp multinode-875000-m03:/home/docker/cp-test.txt                                                           | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:52 PDT | 17 Jul 24 10:52 PDT |
	|         | multinode-875000:/home/docker/cp-test_multinode-875000-m03_multinode-875000.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-875000 ssh -n                                                                                                     | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:52 PDT | 17 Jul 24 10:52 PDT |
	|         | multinode-875000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-875000 ssh -n multinode-875000 sudo cat                                                                           | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:52 PDT | 17 Jul 24 10:52 PDT |
	|         | /home/docker/cp-test_multinode-875000-m03_multinode-875000.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-875000 cp multinode-875000-m03:/home/docker/cp-test.txt                                                           | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:52 PDT | 17 Jul 24 10:52 PDT |
	|         | multinode-875000-m02:/home/docker/cp-test_multinode-875000-m03_multinode-875000-m02.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-875000 ssh -n                                                                                                     | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:52 PDT | 17 Jul 24 10:52 PDT |
	|         | multinode-875000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-875000 ssh -n multinode-875000-m02 sudo cat                                                                       | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:52 PDT | 17 Jul 24 10:52 PDT |
	|         | /home/docker/cp-test_multinode-875000-m03_multinode-875000-m02.txt                                                          |                  |         |         |                     |                     |
	| node    | multinode-875000 node stop m03                                                                                              | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:52 PDT | 17 Jul 24 10:52 PDT |
	| node    | multinode-875000 node start                                                                                                 | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:52 PDT | 17 Jul 24 10:52 PDT |
	|         | m03 -v=7 --alsologtostderr                                                                                                  |                  |         |         |                     |                     |
	| node    | list -p multinode-875000                                                                                                    | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:52 PDT |                     |
	| stop    | -p multinode-875000                                                                                                         | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:52 PDT | 17 Jul 24 10:53 PDT |
	| start   | -p multinode-875000                                                                                                         | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT |                     |
	|         | --wait=true -v=8                                                                                                            |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                           |                  |         |         |                     |                     |
	| node    | list -p multinode-875000                                                                                                    | multinode-875000 | jenkins | v1.33.1 | 17 Jul 24 10:56 PDT |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 10:53:08
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 10:53:08.440682    4493 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:53:08.440954    4493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:53:08.440959    4493 out.go:304] Setting ErrFile to fd 2...
	I0717 10:53:08.440963    4493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:53:08.441129    4493 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
	I0717 10:53:08.442482    4493 out.go:298] Setting JSON to false
	I0717 10:53:08.464365    4493 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3159,"bootTime":1721235629,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0717 10:53:08.464456    4493 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:53:08.486317    4493 out.go:177] * [multinode-875000] minikube v1.33.1 on Darwin 14.5
	I0717 10:53:08.528134    4493 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 10:53:08.528198    4493 notify.go:220] Checking for updates...
	I0717 10:53:08.571666    4493 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:53:08.593227    4493 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 10:53:08.614242    4493 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:53:08.635073    4493 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
	I0717 10:53:08.656076    4493 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:53:08.677950    4493 config.go:182] Loaded profile config "multinode-875000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:53:08.678135    4493 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:53:08.678838    4493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:53:08.678911    4493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:53:08.688398    4493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53167
	I0717 10:53:08.688792    4493 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:53:08.689192    4493 main.go:141] libmachine: Using API Version  1
	I0717 10:53:08.689208    4493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:53:08.689454    4493 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:53:08.689593    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:53:08.719815    4493 out.go:177] * Using the hyperkit driver based on existing profile
	I0717 10:53:08.762231    4493 start.go:297] selected driver: hyperkit
	I0717 10:53:08.762256    4493 start.go:901] validating driver "hyperkit" against &{Name:multinode-875000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.2 ClusterName:multinode-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.15 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.17 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:f
alse ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binary
Mirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:53:08.762479    4493 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:53:08.762666    4493 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:53:08.762865    4493 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19283-1099/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0717 10:53:08.772341    4493 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0717 10:53:08.776095    4493 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:53:08.776116    4493 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0717 10:53:08.778717    4493 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:53:08.778754    4493 cni.go:84] Creating CNI manager for ""
	I0717 10:53:08.778762    4493 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0717 10:53:08.778833    4493 start.go:340] cluster config:
	{Name:multinode-875000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-875000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.15 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.17 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:
false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:53:08.778961    4493 iso.go:125] acquiring lock: {Name:mkf51f842bcc8a77e9c7c50d642c4c76848e96af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:53:08.821198    4493 out.go:177] * Starting "multinode-875000" primary control-plane node in "multinode-875000" cluster
	I0717 10:53:08.842213    4493 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:53:08.842307    4493 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0717 10:53:08.842336    4493 cache.go:56] Caching tarball of preloaded images
	I0717 10:53:08.842535    4493 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:53:08.842553    4493 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:53:08.842741    4493 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/config.json ...
	I0717 10:53:08.843662    4493 start.go:360] acquireMachinesLock for multinode-875000: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:53:08.843803    4493 start.go:364] duration metric: took 84.331µs to acquireMachinesLock for "multinode-875000"
	I0717 10:53:08.843854    4493 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:53:08.843874    4493 fix.go:54] fixHost starting: 
	I0717 10:53:08.844316    4493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:53:08.844355    4493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:53:08.853323    4493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53169
	I0717 10:53:08.853666    4493 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:53:08.854064    4493 main.go:141] libmachine: Using API Version  1
	I0717 10:53:08.854087    4493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:53:08.854307    4493 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:53:08.854421    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:53:08.854517    4493 main.go:141] libmachine: (multinode-875000) Calling .GetState
	I0717 10:53:08.854604    4493 main.go:141] libmachine: (multinode-875000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:53:08.854678    4493 main.go:141] libmachine: (multinode-875000) DBG | hyperkit pid from json: 4146
	I0717 10:53:08.855578    4493 main.go:141] libmachine: (multinode-875000) DBG | hyperkit pid 4146 missing from process table
	I0717 10:53:08.855605    4493 fix.go:112] recreateIfNeeded on multinode-875000: state=Stopped err=<nil>
	I0717 10:53:08.855625    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	W0717 10:53:08.855704    4493 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:53:08.876897    4493 out.go:177] * Restarting existing hyperkit VM for "multinode-875000" ...
	I0717 10:53:08.919115    4493 main.go:141] libmachine: (multinode-875000) Calling .Start
	I0717 10:53:08.919441    4493 main.go:141] libmachine: (multinode-875000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:53:08.919503    4493 main.go:141] libmachine: (multinode-875000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/hyperkit.pid
	I0717 10:53:08.921222    4493 main.go:141] libmachine: (multinode-875000) DBG | hyperkit pid 4146 missing from process table
	I0717 10:53:08.921259    4493 main.go:141] libmachine: (multinode-875000) DBG | pid 4146 is in state "Stopped"
	I0717 10:53:08.921271    4493 main.go:141] libmachine: (multinode-875000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/hyperkit.pid...
	I0717 10:53:08.921823    4493 main.go:141] libmachine: (multinode-875000) DBG | Using UUID 0b492f0d-cc97-495d-b943-8b478d8e6ab6
	I0717 10:53:09.032299    4493 main.go:141] libmachine: (multinode-875000) DBG | Generated MAC 92:c1:c6:6d:b5:4e
	I0717 10:53:09.032323    4493 main.go:141] libmachine: (multinode-875000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-875000
	I0717 10:53:09.032452    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0b492f0d-cc97-495d-b943-8b478d8e6ab6", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037b4a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I0717 10:53:09.032486    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0b492f0d-cc97-495d-b943-8b478d8e6ab6", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037b4a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I0717 10:53:09.032535    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "0b492f0d-cc97-495d-b943-8b478d8e6ab6", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/multinode-875000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/bzimage,/Users/jenkins/minikube-integration/1928
3-1099/.minikube/machines/multinode-875000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-875000"}
	I0717 10:53:09.032569    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 0b492f0d-cc97-495d-b943-8b478d8e6ab6 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/multinode-875000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/initrd,earlyprintk=
serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-875000"
	I0717 10:53:09.032601    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:53:09.034110    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 DEBUG: hyperkit: Pid is 4506
	I0717 10:53:09.034778    4493 main.go:141] libmachine: (multinode-875000) DBG | Attempt 0
	I0717 10:53:09.034790    4493 main.go:141] libmachine: (multinode-875000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:53:09.034924    4493 main.go:141] libmachine: (multinode-875000) DBG | hyperkit pid from json: 4506
	I0717 10:53:09.036394    4493 main.go:141] libmachine: (multinode-875000) DBG | Searching for 92:c1:c6:6d:b5:4e in /var/db/dhcpd_leases ...
	I0717 10:53:09.036449    4493 main.go:141] libmachine: (multinode-875000) DBG | Found 16 entries in /var/db/dhcpd_leases!
	I0717 10:53:09.036479    4493 main.go:141] libmachine: (multinode-875000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:a2:dd:4c:c6:bd:14 ID:1,a2:dd:4c:c6:bd:14 Lease:0x669804f2}
	I0717 10:53:09.036501    4493 main.go:141] libmachine: (multinode-875000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:de:84:ef:f1:8f:c7 ID:1,de:84:ef:f1:8f:c7 Lease:0x669955e8}
	I0717 10:53:09.036514    4493 main.go:141] libmachine: (multinode-875000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:c1:c6:6d:b5:4e ID:1,92:c1:c6:6d:b5:4e Lease:0x669955a6}
	I0717 10:53:09.036529    4493 main.go:141] libmachine: (multinode-875000) DBG | Found match: 92:c1:c6:6d:b5:4e
	I0717 10:53:09.036559    4493 main.go:141] libmachine: (multinode-875000) DBG | IP: 192.169.0.15
	I0717 10:53:09.036578    4493 main.go:141] libmachine: (multinode-875000) Calling .GetConfigRaw
	I0717 10:53:09.037361    4493 main.go:141] libmachine: (multinode-875000) Calling .GetIP
	I0717 10:53:09.037542    4493 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/config.json ...
	I0717 10:53:09.037966    4493 machine.go:94] provisionDockerMachine start ...
	I0717 10:53:09.037977    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:53:09.038140    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:09.038269    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:53:09.038374    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:09.038504    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:09.038606    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:53:09.038734    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:53:09.038932    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0717 10:53:09.038940    4493 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:53:09.041588    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:53:09.095716    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:53:09.096409    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:53:09.096424    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:53:09.096432    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:53:09.096439    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:53:09.474480    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:53:09.474493    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:53:09.589608    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:53:09.589628    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:53:09.589640    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:53:09.589653    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:53:09.590554    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:53:09.590567    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:09 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:53:14.827254    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:14 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:53:14.827270    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:14 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:53:14.827324    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:14 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:53:14.851869    4493 main.go:141] libmachine: (multinode-875000) DBG | 2024/07/17 10:53:14 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:53:44.107008    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:53:44.107032    4493 main.go:141] libmachine: (multinode-875000) Calling .GetMachineName
	I0717 10:53:44.107184    4493 buildroot.go:166] provisioning hostname "multinode-875000"
	I0717 10:53:44.107196    4493 main.go:141] libmachine: (multinode-875000) Calling .GetMachineName
	I0717 10:53:44.107289    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:44.107373    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:53:44.107466    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:44.107551    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:44.107641    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:53:44.107772    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:53:44.107923    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0717 10:53:44.107931    4493 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-875000 && echo "multinode-875000" | sudo tee /etc/hostname
	I0717 10:53:44.174663    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-875000
	
	I0717 10:53:44.174681    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:44.174841    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:53:44.174935    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:44.175018    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:44.175114    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:53:44.175243    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:53:44.175395    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0717 10:53:44.175407    4493 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-875000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-875000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-875000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:53:44.239111    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:53:44.239133    4493 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:53:44.239156    4493 buildroot.go:174] setting up certificates
	I0717 10:53:44.239163    4493 provision.go:84] configureAuth start
	I0717 10:53:44.239170    4493 main.go:141] libmachine: (multinode-875000) Calling .GetMachineName
	I0717 10:53:44.239323    4493 main.go:141] libmachine: (multinode-875000) Calling .GetIP
	I0717 10:53:44.239447    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:44.239546    4493 provision.go:143] copyHostCerts
	I0717 10:53:44.239577    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:53:44.239663    4493 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:53:44.239671    4493 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:53:44.239873    4493 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:53:44.240100    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:53:44.240152    4493 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:53:44.240158    4493 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:53:44.240239    4493 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:53:44.240396    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:53:44.240438    4493 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:53:44.240443    4493 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:53:44.240526    4493 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:53:44.240673    4493 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.multinode-875000 san=[127.0.0.1 192.169.0.15 localhost minikube multinode-875000]
	I0717 10:53:44.434192    4493 provision.go:177] copyRemoteCerts
	I0717 10:53:44.434250    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:53:44.434269    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:44.434410    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:53:44.434519    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:44.434626    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:53:44.434715    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/id_rsa Username:docker}
	I0717 10:53:44.468908    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:53:44.468981    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:53:44.488630    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:53:44.488690    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0717 10:53:44.508446    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:53:44.508513    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 10:53:44.528227    4493 provision.go:87] duration metric: took 289.042237ms to configureAuth
	I0717 10:53:44.528241    4493 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:53:44.528415    4493 config.go:182] Loaded profile config "multinode-875000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:53:44.528429    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:53:44.528563    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:44.528658    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:53:44.528735    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:44.528813    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:44.528888    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:53:44.529001    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:53:44.529113    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0717 10:53:44.529120    4493 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:53:44.586725    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:53:44.586737    4493 buildroot.go:70] root file system type: tmpfs
	I0717 10:53:44.586812    4493 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:53:44.586825    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:44.586946    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:53:44.587041    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:44.587130    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:44.587206    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:53:44.587338    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:53:44.587473    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0717 10:53:44.587516    4493 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:53:44.657154    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:53:44.657176    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:44.657326    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:53:44.657422    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:44.657530    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:44.657627    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:53:44.657793    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:53:44.657950    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0717 10:53:44.657962    4493 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:53:46.286868    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:53:46.286885    4493 machine.go:97] duration metric: took 37.247877396s to provisionDockerMachine
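
	Note: the unit swap above (diff the freshly generated docker.service.new against the installed unit, move it into place if they differ, then daemon-reload / enable / restart) can be spot-checked by hand. A minimal sketch, not part of the test run, assuming the multinode-875000 VM is reachable through the minikube ssh helper:

	    # Show the unit systemd actually loaded and confirm it is enabled and using the expected cgroup driver.
	    out/minikube-darwin-amd64 ssh -p multinode-875000 -- systemctl cat docker
	    out/minikube-darwin-amd64 ssh -p multinode-875000 -- systemctl is-enabled docker
	    out/minikube-darwin-amd64 ssh -p multinode-875000 -- "docker info --format '{{.CgroupDriver}}'"
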
	I0717 10:53:46.286899    4493 start.go:293] postStartSetup for "multinode-875000" (driver="hyperkit")
	I0717 10:53:46.286907    4493 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:53:46.286920    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:53:46.287106    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:53:46.287119    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:46.287234    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:53:46.287334    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:46.287432    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:53:46.287518    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/id_rsa Username:docker}
	I0717 10:53:46.323841    4493 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:53:46.326746    4493 command_runner.go:130] > NAME=Buildroot
	I0717 10:53:46.326765    4493 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0717 10:53:46.326770    4493 command_runner.go:130] > ID=buildroot
	I0717 10:53:46.326774    4493 command_runner.go:130] > VERSION_ID=2023.02.9
	I0717 10:53:46.326778    4493 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0717 10:53:46.326891    4493 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:53:46.326903    4493 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:53:46.327001    4493 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:53:46.327192    4493 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:53:46.327199    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:53:46.327412    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:53:46.335200    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:53:46.354322    4493 start.go:296] duration metric: took 67.412253ms for postStartSetup
	I0717 10:53:46.354346    4493 fix.go:56] duration metric: took 37.509442863s for fixHost
	I0717 10:53:46.354359    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:46.354492    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:53:46.354588    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:46.354663    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:46.354756    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:53:46.354873    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:53:46.355011    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0717 10:53:46.355018    4493 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 10:53:46.413735    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721238826.487514067
	
	I0717 10:53:46.413746    4493 fix.go:216] guest clock: 1721238826.487514067
	I0717 10:53:46.413751    4493 fix.go:229] Guest: 2024-07-17 10:53:46.487514067 -0700 PDT Remote: 2024-07-17 10:53:46.354349 -0700 PDT m=+37.949500651 (delta=133.165067ms)
	I0717 10:53:46.413777    4493 fix.go:200] guest clock delta is within tolerance: 133.165067ms
	I0717 10:53:46.413782    4493 start.go:83] releasing machines lock for "multinode-875000", held for 37.568918907s
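
	Note: the guest-clock check above asks the VM for `date +%s.%N`, compares it against the host clock taken at roughly the same moment, and accepts the result when the delta stays within a small tolerance (133ms here). A rough manual equivalent, at whole-second precision since macOS `date` has no %N (illustrative only):

	    host_now=$(date +%s)
	    guest_now=$(out/minikube-darwin-amd64 ssh -p multinode-875000 -- date +%s | tr -d '\r')
	    echo "guest/host skew: $((guest_now - host_now))s"
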
	I0717 10:53:46.413799    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:53:46.413927    4493 main.go:141] libmachine: (multinode-875000) Calling .GetIP
	I0717 10:53:46.414023    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:53:46.414324    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:53:46.414437    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:53:46.414519    4493 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:53:46.414551    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:46.414592    4493 ssh_runner.go:195] Run: cat /version.json
	I0717 10:53:46.414604    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:53:46.414665    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:53:46.414712    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:53:46.414754    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:46.414804    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:53:46.414827    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:53:46.414899    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/id_rsa Username:docker}
	I0717 10:53:46.414916    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:53:46.415015    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/id_rsa Username:docker}
	I0717 10:53:46.446050    4493 command_runner.go:130] > {"iso_version": "v1.33.1-1721146474-19264", "kicbase_version": "v0.0.44-1721064868-19249", "minikube_version": "v1.33.1", "commit": "6e0d7ef26437c947028f356d4449a323918e966e"}
	I0717 10:53:46.446222    4493 ssh_runner.go:195] Run: systemctl --version
	I0717 10:53:46.495969    4493 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 10:53:46.497057    4493 command_runner.go:130] > systemd 252 (252)
	I0717 10:53:46.497112    4493 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0717 10:53:46.497244    4493 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 10:53:46.502202    4493 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 10:53:46.502226    4493 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:53:46.502268    4493 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:53:46.514783    4493 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0717 10:53:46.514802    4493 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:53:46.514814    4493 start.go:495] detecting cgroup driver to use...
	I0717 10:53:46.514919    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:53:46.529710    4493 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0717 10:53:46.529926    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:53:46.538945    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:53:46.547744    4493 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:53:46.547783    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:53:46.556835    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:53:46.565925    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:53:46.574800    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:53:46.583709    4493 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:53:46.592744    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:53:46.601366    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:53:46.610134    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:53:46.619067    4493 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:53:46.627080    4493 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 10:53:46.627236    4493 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:53:46.635244    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:53:46.730124    4493 ssh_runner.go:195] Run: sudo systemctl restart containerd
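
	Note: after the sed edits above, the containerd settings that matter for this run (cgroupfs driver, pause image, CNI conf dir) can be confirmed by grepping the rewritten config. Illustrative spot-check, not part of the test:

	    out/minikube-darwin-amd64 ssh -p multinode-875000 -- \
	      "grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml"
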
	I0717 10:53:46.744976    4493 start.go:495] detecting cgroup driver to use...
	I0717 10:53:46.745053    4493 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:53:46.755963    4493 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0717 10:53:46.756906    4493 command_runner.go:130] > [Unit]
	I0717 10:53:46.756917    4493 command_runner.go:130] > Description=Docker Application Container Engine
	I0717 10:53:46.756922    4493 command_runner.go:130] > Documentation=https://docs.docker.com
	I0717 10:53:46.756927    4493 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0717 10:53:46.756941    4493 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0717 10:53:46.756946    4493 command_runner.go:130] > StartLimitBurst=3
	I0717 10:53:46.756950    4493 command_runner.go:130] > StartLimitIntervalSec=60
	I0717 10:53:46.756954    4493 command_runner.go:130] > [Service]
	I0717 10:53:46.756957    4493 command_runner.go:130] > Type=notify
	I0717 10:53:46.756961    4493 command_runner.go:130] > Restart=on-failure
	I0717 10:53:46.756967    4493 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0717 10:53:46.756975    4493 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0717 10:53:46.756981    4493 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0717 10:53:46.756987    4493 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0717 10:53:46.756992    4493 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0717 10:53:46.756997    4493 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0717 10:53:46.757004    4493 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0717 10:53:46.757013    4493 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0717 10:53:46.757023    4493 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0717 10:53:46.757028    4493 command_runner.go:130] > ExecStart=
	I0717 10:53:46.757044    4493 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0717 10:53:46.757049    4493 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0717 10:53:46.757056    4493 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0717 10:53:46.757062    4493 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0717 10:53:46.757066    4493 command_runner.go:130] > LimitNOFILE=infinity
	I0717 10:53:46.757070    4493 command_runner.go:130] > LimitNPROC=infinity
	I0717 10:53:46.757074    4493 command_runner.go:130] > LimitCORE=infinity
	I0717 10:53:46.757078    4493 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0717 10:53:46.757083    4493 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0717 10:53:46.757087    4493 command_runner.go:130] > TasksMax=infinity
	I0717 10:53:46.757090    4493 command_runner.go:130] > TimeoutStartSec=0
	I0717 10:53:46.757095    4493 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0717 10:53:46.757100    4493 command_runner.go:130] > Delegate=yes
	I0717 10:53:46.757105    4493 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0717 10:53:46.757108    4493 command_runner.go:130] > KillMode=process
	I0717 10:53:46.757113    4493 command_runner.go:130] > [Install]
	I0717 10:53:46.757132    4493 command_runner.go:130] > WantedBy=multi-user.target
	I0717 10:53:46.757267    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:53:46.768312    4493 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:53:46.780908    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:53:46.792555    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:53:46.803455    4493 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:53:46.828999    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:53:46.841901    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:53:46.858496    4493 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0717 10:53:46.858783    4493 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:53:46.861485    4493 command_runner.go:130] > /usr/bin/cri-dockerd
	I0717 10:53:46.861702    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:53:46.868721    4493 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:53:46.882414    4493 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:53:46.980451    4493 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:53:47.086611    4493 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:53:47.086712    4493 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:53:47.101549    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:53:47.197870    4493 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:53:49.510733    4493 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.312762402s)
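
	Note: the 130-byte /etc/docker/daemon.json copied in above is not printed in the log; per the preceding line its job is to pin Docker to the cgroupfs driver. A file of this general shape would do that (illustrative sketch only, run inside the VM; not the literal payload):

	    sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
	    {
	      "exec-opts": ["native.cgroupdriver=cgroupfs"]
	    }
	    EOF
	    sudo systemctl daemon-reload && sudo systemctl restart docker
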
	I0717 10:53:49.510816    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:53:49.521296    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:53:49.531765    4493 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:53:49.624402    4493 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:53:49.727316    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:53:49.832508    4493 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:53:49.846162    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:53:49.857284    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:53:49.953870    4493 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 10:53:50.012779    4493 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:53:50.012864    4493 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:53:50.016767    4493 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0717 10:53:50.016783    4493 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 10:53:50.016791    4493 command_runner.go:130] > Device: 0,22	Inode: 758         Links: 1
	I0717 10:53:50.016799    4493 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0717 10:53:50.016803    4493 command_runner.go:130] > Access: 2024-07-17 17:53:50.041558660 +0000
	I0717 10:53:50.016832    4493 command_runner.go:130] > Modify: 2024-07-17 17:53:50.041558660 +0000
	I0717 10:53:50.016837    4493 command_runner.go:130] > Change: 2024-07-17 17:53:50.043558660 +0000
	I0717 10:53:50.016841    4493 command_runner.go:130] >  Birth: -
	I0717 10:53:50.017042    4493 start.go:563] Will wait 60s for crictl version
	I0717 10:53:50.017093    4493 ssh_runner.go:195] Run: which crictl
	I0717 10:53:50.021044    4493 command_runner.go:130] > /usr/bin/crictl
	I0717 10:53:50.021140    4493 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:53:50.046022    4493 command_runner.go:130] > Version:  0.1.0
	I0717 10:53:50.046035    4493 command_runner.go:130] > RuntimeName:  docker
	I0717 10:53:50.046039    4493 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0717 10:53:50.046043    4493 command_runner.go:130] > RuntimeApiVersion:  v1
	I0717 10:53:50.047026    4493 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 10:53:50.047098    4493 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:53:50.063588    4493 command_runner.go:130] > 27.0.3
	I0717 10:53:50.064527    4493 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:53:50.080049    4493 command_runner.go:130] > 27.0.3
	I0717 10:53:50.126676    4493 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:53:50.126730    4493 main.go:141] libmachine: (multinode-875000) Calling .GetIP
	I0717 10:53:50.127131    4493 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:53:50.132176    4493 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:53:50.141837    4493 kubeadm.go:883] updating cluster {Name:multinode-875000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.30.2 ClusterName:multinode-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.15 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.17 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 10:53:50.141927    4493 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:53:50.141982    4493 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 10:53:50.153814    4493 command_runner.go:130] > kindest/kindnetd:v20240715-585640e9
	I0717 10:53:50.153827    4493 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0717 10:53:50.153832    4493 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0717 10:53:50.153836    4493 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0717 10:53:50.153839    4493 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0717 10:53:50.153843    4493 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0717 10:53:50.153847    4493 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0717 10:53:50.153850    4493 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0717 10:53:50.153856    4493 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 10:53:50.153860    4493 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0717 10:53:50.154664    4493 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0717 10:53:50.154675    4493 docker.go:615] Images already preloaded, skipping extraction
	I0717 10:53:50.154745    4493 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 10:53:50.167432    4493 command_runner.go:130] > kindest/kindnetd:v20240715-585640e9
	I0717 10:53:50.167445    4493 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0717 10:53:50.167450    4493 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0717 10:53:50.167455    4493 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0717 10:53:50.167460    4493 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0717 10:53:50.167463    4493 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0717 10:53:50.167468    4493 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0717 10:53:50.167473    4493 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0717 10:53:50.167477    4493 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 10:53:50.167481    4493 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0717 10:53:50.168119    4493 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0717 10:53:50.168135    4493 cache_images.go:84] Images are preloaded, skipping loading
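
	Note: the preload decision above checks the node's `docker images` listing against the images minikube expects from its preload tarball; since everything is already present, extraction is skipped. A hand-rolled variant of the core of that comparison (illustrative; `kubeadm config images list` covers only the control-plane images, not minikube extras such as the storage provisioner, busybox, or kindnet):

	    # What the node already has
	    out/minikube-darwin-amd64 ssh -p multinode-875000 -- "docker images --format '{{.Repository}}:{{.Tag}}'" | sort
	    # What kubeadm itself expects for this Kubernetes version (binary lives on the node)
	    out/minikube-darwin-amd64 ssh -p multinode-875000 -- \
	      sudo /var/lib/minikube/binaries/v1.30.2/kubeadm config images list --kubernetes-version v1.30.2
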
	I0717 10:53:50.168143    4493 kubeadm.go:934] updating node { 192.169.0.15 8443 v1.30.2 docker true true} ...
	I0717 10:53:50.168220    4493 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-875000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 10:53:50.168290    4493 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 10:53:50.185058    4493 command_runner.go:130] > cgroupfs
	I0717 10:53:50.185937    4493 cni.go:84] Creating CNI manager for ""
	I0717 10:53:50.185946    4493 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0717 10:53:50.185956    4493 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 10:53:50.185976    4493 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.15 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-875000 NodeName:multinode-875000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 10:53:50.186060    4493 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-875000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 10:53:50.186121    4493 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:53:50.193740    4493 command_runner.go:130] > kubeadm
	I0717 10:53:50.193748    4493 command_runner.go:130] > kubectl
	I0717 10:53:50.193752    4493 command_runner.go:130] > kubelet
	I0717 10:53:50.193938    4493 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:53:50.193982    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 10:53:50.201366    4493 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0717 10:53:50.215766    4493 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:53:50.229396    4493 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
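
	Note: with the kubelet unit, its kubeadm drop-in, and kubeadm.yaml.new all on the node, the generated config can be sanity-checked before it is used. A possible spot-check, illustrative only and assuming the kubeadm build on the ISO supports `config validate` (added upstream in v1.26, so v1.30.2 should carry it):

	    out/minikube-darwin-amd64 ssh -p multinode-875000 -- \
	      sudo /var/lib/minikube/binaries/v1.30.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
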
	I0717 10:53:50.243072    4493 ssh_runner.go:195] Run: grep 192.169.0.15	control-plane.minikube.internal$ /etc/hosts
	I0717 10:53:50.246029    4493 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:53:50.255461    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:53:50.344956    4493 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:53:50.360068    4493 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000 for IP: 192.169.0.15
	I0717 10:53:50.360081    4493 certs.go:194] generating shared ca certs ...
	I0717 10:53:50.360092    4493 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:53:50.360278    4493 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:53:50.360353    4493 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:53:50.360363    4493 certs.go:256] generating profile certs ...
	I0717 10:53:50.360474    4493 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/client.key
	I0717 10:53:50.360554    4493 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/apiserver.key.20aa8b3c
	I0717 10:53:50.360623    4493 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/proxy-client.key
	I0717 10:53:50.360630    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:53:50.360651    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:53:50.360669    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:53:50.360687    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:53:50.360705    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 10:53:50.360735    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 10:53:50.360768    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 10:53:50.360788    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 10:53:50.360898    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:53:50.360948    4493 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:53:50.360957    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:53:50.360991    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:53:50.361022    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:53:50.361051    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:53:50.361117    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:53:50.361151    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:53:50.361173    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:53:50.361191    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:53:50.361711    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:53:50.402390    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:53:50.429736    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:53:50.455959    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:53:50.476621    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 10:53:50.496672    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 10:53:50.516680    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 10:53:50.536803    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 10:53:50.556794    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:53:50.576815    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:53:50.596757    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:53:50.616657    4493 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 10:53:50.630640    4493 ssh_runner.go:195] Run: openssl version
	I0717 10:53:50.634718    4493 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0717 10:53:50.634860    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:53:50.643295    4493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:53:50.646578    4493 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:53:50.646698    4493 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:53:50.646737    4493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:53:50.650875    4493 command_runner.go:130] > 3ec20f2e
	I0717 10:53:50.651029    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 10:53:50.659564    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:53:50.667965    4493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:53:50.671241    4493 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:53:50.671375    4493 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:53:50.671413    4493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:53:50.675460    4493 command_runner.go:130] > b5213941
	I0717 10:53:50.675590    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:53:50.683988    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:53:50.692486    4493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:53:50.695766    4493 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:53:50.695880    4493 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:53:50.695911    4493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:53:50.699964    4493 command_runner.go:130] > 51391683
	I0717 10:53:50.700098    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
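
	Note: the hash-named symlinks created above follow the standard OpenSSL CA-directory convention: `openssl x509 -hash` prints the certificate's subject hash, and a `<hash>.0` link in /etc/ssl/certs lets OpenSSL locate the CA during verification. Condensed from the steps above (illustrative):

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/"$h".0
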
	I0717 10:53:50.708346    4493 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:53:50.711619    4493 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:53:50.711631    4493 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0717 10:53:50.711638    4493 command_runner.go:130] > Device: 253,1	Inode: 531538      Links: 1
	I0717 10:53:50.711649    4493 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 10:53:50.711656    4493 command_runner.go:130] > Access: 2024-07-17 17:49:41.246167045 +0000
	I0717 10:53:50.711661    4493 command_runner.go:130] > Modify: 2024-07-17 17:49:41.246167045 +0000
	I0717 10:53:50.711665    4493 command_runner.go:130] > Change: 2024-07-17 17:49:41.246167045 +0000
	I0717 10:53:50.711669    4493 command_runner.go:130] >  Birth: 2024-07-17 17:49:41.246167045 +0000
	I0717 10:53:50.711776    4493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 10:53:50.716013    4493 command_runner.go:130] > Certificate will not expire
	I0717 10:53:50.716105    4493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 10:53:50.720269    4493 command_runner.go:130] > Certificate will not expire
	I0717 10:53:50.720423    4493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 10:53:50.724600    4493 command_runner.go:130] > Certificate will not expire
	I0717 10:53:50.724775    4493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 10:53:50.728857    4493 command_runner.go:130] > Certificate will not expire
	I0717 10:53:50.728989    4493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 10:53:50.733108    4493 command_runner.go:130] > Certificate will not expire
	I0717 10:53:50.733337    4493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 10:53:50.737381    4493 command_runner.go:130] > Certificate will not expire
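
	Note: the `-checkend 86400` probes above ask OpenSSL whether each certificate will still be valid 86400 seconds (24 hours) from now; "Certificate will not expire" with a zero exit status means it will, which is why the restart proceeds without regenerating any certs. Equivalent standalone check (illustrative):

	    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	      && echo "still valid for at least 24h" || echo "expires within 24h"
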
	I0717 10:53:50.737570    4493 kubeadm.go:392] StartCluster: {Name:multinode-875000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:multinode-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.15 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.17 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns
:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:53:50.737674    4493 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 10:53:50.750520    4493 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 10:53:50.758116    4493 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0717 10:53:50.758125    4493 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0717 10:53:50.758130    4493 command_runner.go:130] > /var/lib/minikube/etcd:
	I0717 10:53:50.758136    4493 command_runner.go:130] > member
	I0717 10:53:50.758214    4493 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 10:53:50.758226    4493 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 10:53:50.758268    4493 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 10:53:50.765622    4493 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 10:53:50.765931    4493 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-875000" does not appear in /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:53:50.766023    4493 kubeconfig.go:62] /Users/jenkins/minikube-integration/19283-1099/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-875000" cluster setting kubeconfig missing "multinode-875000" context setting]
	I0717 10:53:50.766197    4493 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:53:50.766873    4493 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:53:50.767061    4493 kapi.go:59] client config for multinode-875000: &rest.Config{Host:"https://192.169.0.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xeec6b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 10:53:50.767404    4493 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 10:53:50.767531    4493 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 10:53:50.774780    4493 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.15
	I0717 10:53:50.774798    4493 kubeadm.go:1160] stopping kube-system containers ...
	I0717 10:53:50.774856    4493 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 10:53:50.788254    4493 command_runner.go:130] > 628249f927da
	I0717 10:53:50.788267    4493 command_runner.go:130] > 29731a7ae130
	I0717 10:53:50.788270    4493 command_runner.go:130] > 8d5379f364df
	I0717 10:53:50.788274    4493 command_runner.go:130] > ed1c80ce77a0
	I0717 10:53:50.788277    4493 command_runner.go:130] > f9b27278d789
	I0717 10:53:50.788280    4493 command_runner.go:130] > cdb993aecac1
	I0717 10:53:50.788284    4493 command_runner.go:130] > 6c2c175018f8
	I0717 10:53:50.788287    4493 command_runner.go:130] > 004a5be3ccef
	I0717 10:53:50.788290    4493 command_runner.go:130] > fbeb1615ce07
	I0717 10:53:50.788294    4493 command_runner.go:130] > 2966fb0e7dc1
	I0717 10:53:50.788303    4493 command_runner.go:130] > 6a219499b617
	I0717 10:53:50.788306    4493 command_runner.go:130] > f441455bef84
	I0717 10:53:50.788311    4493 command_runner.go:130] > 4d352419a758
	I0717 10:53:50.788315    4493 command_runner.go:130] > 3f3c486ee3b8
	I0717 10:53:50.788318    4493 command_runner.go:130] > 4355a2bd64f7
	I0717 10:53:50.788322    4493 command_runner.go:130] > c6831086186c
	I0717 10:53:50.788775    4493 docker.go:483] Stopping containers: [628249f927da 29731a7ae130 8d5379f364df ed1c80ce77a0 f9b27278d789 cdb993aecac1 6c2c175018f8 004a5be3ccef fbeb1615ce07 2966fb0e7dc1 6a219499b617 f441455bef84 4d352419a758 3f3c486ee3b8 4355a2bd64f7 c6831086186c]
	I0717 10:53:50.788852    4493 ssh_runner.go:195] Run: docker stop 628249f927da 29731a7ae130 8d5379f364df ed1c80ce77a0 f9b27278d789 cdb993aecac1 6c2c175018f8 004a5be3ccef fbeb1615ce07 2966fb0e7dc1 6a219499b617 f441455bef84 4d352419a758 3f3c486ee3b8 4355a2bd64f7 c6831086186c
	I0717 10:53:50.804816    4493 command_runner.go:130] > 628249f927da
	I0717 10:53:50.804828    4493 command_runner.go:130] > 29731a7ae130
	I0717 10:53:50.804832    4493 command_runner.go:130] > 8d5379f364df
	I0717 10:53:50.804835    4493 command_runner.go:130] > ed1c80ce77a0
	I0717 10:53:50.804839    4493 command_runner.go:130] > f9b27278d789
	I0717 10:53:50.804860    4493 command_runner.go:130] > cdb993aecac1
	I0717 10:53:50.804869    4493 command_runner.go:130] > 6c2c175018f8
	I0717 10:53:50.804872    4493 command_runner.go:130] > 004a5be3ccef
	I0717 10:53:50.804875    4493 command_runner.go:130] > fbeb1615ce07
	I0717 10:53:50.804879    4493 command_runner.go:130] > 2966fb0e7dc1
	I0717 10:53:50.804883    4493 command_runner.go:130] > 6a219499b617
	I0717 10:53:50.804886    4493 command_runner.go:130] > f441455bef84
	I0717 10:53:50.804889    4493 command_runner.go:130] > 4d352419a758
	I0717 10:53:50.804892    4493 command_runner.go:130] > 3f3c486ee3b8
	I0717 10:53:50.804895    4493 command_runner.go:130] > 4355a2bd64f7
	I0717 10:53:50.804898    4493 command_runner.go:130] > c6831086186c
	I0717 10:53:50.804976    4493 ssh_runner.go:195] Run: sudo systemctl stop kubelet
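The teardown above boils down to three commands: list the kube-system containers by their k8s_ name pattern, stop them, then stop the kubelet. A minimal Go sketch of that same sequence (not minikube's own code; it assumes docker and systemctl are on the node's PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same filter as the log: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	fmt.Println("stopping kube-system containers:", ids)

	if len(ids) > 0 {
		// docker stop <id> <id> ...
		if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
			panic(err)
		}
	}

	// sudo systemctl stop kubelet
	if err := exec.Command("sudo", "systemctl", "stop", "kubelet").Run(); err != nil {
		panic(err)
	}
}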
	I0717 10:53:50.817155    4493 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 10:53:50.824524    4493 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0717 10:53:50.824535    4493 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0717 10:53:50.824541    4493 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0717 10:53:50.824547    4493 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 10:53:50.824578    4493 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 10:53:50.824584    4493 kubeadm.go:157] found existing configuration files:
	
	I0717 10:53:50.824624    4493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 10:53:50.831830    4493 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 10:53:50.831856    4493 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 10:53:50.831901    4493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 10:53:50.839256    4493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 10:53:50.846349    4493 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 10:53:50.846368    4493 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 10:53:50.846403    4493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 10:53:50.853788    4493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 10:53:50.861130    4493 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 10:53:50.861146    4493 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 10:53:50.861179    4493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 10:53:50.868477    4493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 10:53:50.875549    4493 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 10:53:50.875573    4493 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 10:53:50.875612    4493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
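The four grep/rm pairs above implement a simple stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and removed otherwise so the next kubeadm phase regenerates it. A rough Go equivalent of that loop (illustrative only; the paths and endpoint are the ones in the log):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing file or wrong endpoint: remove it so kubeadm rewrites it.
			_ = os.Remove(f)
			fmt.Printf("removed (stale or missing): %s\n", f)
			continue
		}
		fmt.Printf("kept: %s\n", f)
	}
}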
	I0717 10:53:50.882814    4493 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 10:53:50.890101    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 10:53:50.951322    4493 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 10:53:50.951454    4493 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0717 10:53:50.951640    4493 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0717 10:53:50.951807    4493 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 10:53:50.952063    4493 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0717 10:53:50.952329    4493 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0717 10:53:50.952683    4493 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0717 10:53:50.952827    4493 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0717 10:53:50.952998    4493 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0717 10:53:50.953189    4493 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 10:53:50.953340    4493 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 10:53:50.953548    4493 command_runner.go:130] > [certs] Using the existing "sa" key
	I0717 10:53:50.954497    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 10:53:50.991733    4493 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 10:53:51.385775    4493 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 10:53:51.709655    4493 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 10:53:51.893900    4493 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 10:53:51.988631    4493 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 10:53:52.421536    4493 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 10:53:52.423448    4493 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.468896803s)
	I0717 10:53:52.423462    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 10:53:52.473231    4493 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 10:53:52.473980    4493 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 10:53:52.474003    4493 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0717 10:53:52.580900    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 10:53:52.635178    4493 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 10:53:52.635192    4493 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 10:53:52.636821    4493 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 10:53:52.643807    4493 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 10:53:52.646004    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 10:53:52.730917    4493 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
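Rather than a full `kubeadm init`, the restart path runs the individual init phases in order against the existing config: certs, kubeconfig, kubelet-start, control-plane, and etcd. A compressed sketch of that sequence (illustrative; the binary and config paths are the ones logged above):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.30.2/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		fmt.Println("running: kubeadm", args)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
}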
	I0717 10:53:52.740305    4493 api_server.go:52] waiting for apiserver process to appear ...
	I0717 10:53:52.740372    4493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 10:53:53.240795    4493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 10:53:53.740591    4493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 10:53:53.752086    4493 command_runner.go:130] > 1676
	I0717 10:53:53.752559    4493 api_server.go:72] duration metric: took 1.012234226s to wait for apiserver process to appear ...
	I0717 10:53:53.752568    4493 api_server.go:88] waiting for apiserver healthz status ...
	I0717 10:53:53.752583    4493 api_server.go:253] Checking apiserver healthz at https://192.169.0.15:8443/healthz ...
	I0717 10:53:55.371559    4493 api_server.go:279] https://192.169.0.15:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 10:53:55.371577    4493 api_server.go:103] status: https://192.169.0.15:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 10:53:55.371588    4493 api_server.go:253] Checking apiserver healthz at https://192.169.0.15:8443/healthz ...
	I0717 10:53:55.398484    4493 api_server.go:279] https://192.169.0.15:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 10:53:55.398499    4493 api_server.go:103] status: https://192.169.0.15:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 10:53:55.753909    4493 api_server.go:253] Checking apiserver healthz at https://192.169.0.15:8443/healthz ...
	I0717 10:53:55.758845    4493 api_server.go:279] https://192.169.0.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 10:53:55.758858    4493 api_server.go:103] status: https://192.169.0.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 10:53:56.254786    4493 api_server.go:253] Checking apiserver healthz at https://192.169.0.15:8443/healthz ...
	I0717 10:53:56.259708    4493 api_server.go:279] https://192.169.0.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 10:53:56.259720    4493 api_server.go:103] status: https://192.169.0.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 10:53:56.753673    4493 api_server.go:253] Checking apiserver healthz at https://192.169.0.15:8443/healthz ...
	I0717 10:53:56.756594    4493 api_server.go:279] https://192.169.0.15:8443/healthz returned 200:
	ok
	I0717 10:53:56.756652    4493 round_trippers.go:463] GET https://192.169.0.15:8443/version
	I0717 10:53:56.756657    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:56.756663    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:56.756669    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:56.761255    4493 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 10:53:56.761264    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:56.761269    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:56.761273    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:56.761276    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:56.761279    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:56.761281    4493 round_trippers.go:580]     Content-Length: 263
	I0717 10:53:56.761299    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:56 GMT
	I0717 10:53:56.761304    4493 round_trippers.go:580]     Audit-Id: 20314caf-1202-44f4-8996-bf27e6cf6969
	I0717 10:53:56.761324    4493 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0717 10:53:56.761367    4493 api_server.go:141] control plane version: v1.30.2
	I0717 10:53:56.761377    4493 api_server.go:131] duration metric: took 3.008723839s to wait for apiserver health ...
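The health wait above is an anonymous poll of /healthz: 403 (RBAC not yet bootstrapped for anonymous requests) and 500 (post-start hooks still failing) both count as "not ready yet", and the loop exits once the endpoint returns 200 "ok". A small self-contained sketch of the same probe (TLS verification is skipped because the request is unauthenticated; the address is the one in the log):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.169.0.15:8443/healthz"
	for i := 0; i < 60; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("apiserver healthy: %s\n", body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}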
	I0717 10:53:56.761383    4493 cni.go:84] Creating CNI manager for ""
	I0717 10:53:56.761387    4493 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0717 10:53:56.783921    4493 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 10:53:56.805067    4493 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 10:53:56.811013    4493 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0717 10:53:56.811027    4493 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0717 10:53:56.811033    4493 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0717 10:53:56.811038    4493 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 10:53:56.811044    4493 command_runner.go:130] > Access: 2024-07-17 17:53:18.129065165 +0000
	I0717 10:53:56.811050    4493 command_runner.go:130] > Modify: 2024-07-16 21:31:18.000000000 +0000
	I0717 10:53:56.811058    4493 command_runner.go:130] > Change: 2024-07-17 17:53:16.576065081 +0000
	I0717 10:53:56.811067    4493 command_runner.go:130] >  Birth: -
	I0717 10:53:56.811289    4493 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0717 10:53:56.811297    4493 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 10:53:56.831327    4493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 10:53:57.226431    4493 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0717 10:53:57.253309    4493 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0717 10:53:57.384856    4493 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0717 10:53:57.466701    4493 command_runner.go:130] > daemonset.apps/kindnet configured
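The CNI step is two operations: copy the generated kindnet manifest to /var/tmp/minikube/cni.yaml on the node, then apply it with the pinned kubectl and the in-VM kubeconfig, as the two Run: lines show. A sketch of the same pair of steps (the local cni.yaml source file here is hypothetical; minikube ships the manifest from memory over scp):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Hypothetical local copy of the generated kindnet manifest.
	manifest, err := os.ReadFile("cni.yaml")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0o644); err != nil {
		panic(err)
	}
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.2/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}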
	I0717 10:53:57.468049    4493 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 10:53:57.468110    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods
	I0717 10:53:57.468115    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.468121    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.468125    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.470857    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:53:57.470869    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.470878    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.470884    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.470890    4493 round_trippers.go:580]     Audit-Id: be11fdef-3178-4b6b-9b73-af4516117470
	I0717 10:53:57.470895    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.470899    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.470903    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.471831    4493 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"769"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"766","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 87605 chars]
	I0717 10:53:57.474696    4493 system_pods.go:59] 12 kube-system pods found
	I0717 10:53:57.474714    4493 system_pods.go:61] "coredns-7db6d8ff4d-nlwxm" [d9e6c103-3eba-4549-b327-23c87ce480cd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 10:53:57.474719    4493 system_pods.go:61] "etcd-multinode-875000" [b181608e-80a7-4ef3-9702-315fe76bc83b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 10:53:57.474724    4493 system_pods.go:61] "kindnet-fnltt" [31c26a51-23d0-4f20-a716-fbe77e2d1347] Running
	I0717 10:53:57.474728    4493 system_pods.go:61] "kindnet-hwkds" [41b256d2-0784-4ebc-82a6-1d435f44924e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0717 10:53:57.474731    4493 system_pods.go:61] "kindnet-pj9kh" [fd101f4e-0ee3-45fa-b5ed-0957fb0c87f5] Running
	I0717 10:53:57.474735    4493 system_pods.go:61] "kube-apiserver-multinode-875000" [994530a7-11e7-4b05-95ec-c77751a6c24d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 10:53:57.474741    4493 system_pods.go:61] "kube-controller-manager-multinode-875000" [10a5876c-ddf6-4f37-82ca-96ea7ebde028] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 10:53:57.474744    4493 system_pods.go:61] "kube-proxy-dnn4j" [fd7faf4d-f212-4c89-9ac5-8e408c295411] Running
	I0717 10:53:57.474747    4493 system_pods.go:61] "kube-proxy-tp2zz" [9fda8ef7-b324-4cbb-a8d9-98f93132b2e7] Running
	I0717 10:53:57.474750    4493 system_pods.go:61] "kube-proxy-zs8f8" [9e2bce56-d9e0-42a1-a265-4aab3577b031] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 10:53:57.474755    4493 system_pods.go:61] "kube-scheduler-multinode-875000" [b2f1c23d-635b-490e-a964-c28e1566ead0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 10:53:57.474759    4493 system_pods.go:61] "storage-provisioner" [2bf95484-4db9-4dc1-80b0-b4a35569c9af] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 10:53:57.474764    4493 system_pods.go:74] duration metric: took 6.708549ms to wait for pod list to return data ...
	I0717 10:53:57.474771    4493 node_conditions.go:102] verifying NodePressure condition ...
	I0717 10:53:57.474806    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes
	I0717 10:53:57.474811    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.474816    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.474819    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.477036    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:53:57.477050    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.477058    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.477074    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.477083    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.477086    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.477089    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.477092    4493 round_trippers.go:580]     Audit-Id: b542354e-dc6e-4cf5-bb27-2e1f02e5412a
	I0717 10:53:57.477286    4493 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"769"},"items":[{"metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14802 chars]
	I0717 10:53:57.477811    4493 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:53:57.477823    4493 node_conditions.go:123] node cpu capacity is 2
	I0717 10:53:57.477832    4493 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:53:57.477835    4493 node_conditions.go:123] node cpu capacity is 2
	I0717 10:53:57.477838    4493 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:53:57.477841    4493 node_conditions.go:123] node cpu capacity is 2
	I0717 10:53:57.477844    4493 node_conditions.go:105] duration metric: took 3.069599ms to run NodePressure ...
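The NodePressure check reads the node list once and records each node's CPU and ephemeral-storage capacity, which is where the three "cpu capacity is 2" / "17734596Ki" pairs above come from. An equivalent query with client-go (the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}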
	I0717 10:53:57.477854    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 10:53:57.638782    4493 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0717 10:53:57.765972    4493 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0717 10:53:57.767197    4493 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 10:53:57.767251    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0717 10:53:57.767256    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.767262    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.767267    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.769104    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:53:57.769115    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.769122    4493 round_trippers.go:580]     Audit-Id: 7ceb3ea3-1596-4dc0-86c1-d925082ba2a2
	I0717 10:53:57.769127    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.769131    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.769135    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.769139    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.769144    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.769518    4493 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"772"},"items":[{"metadata":{"name":"etcd-multinode-875000","namespace":"kube-system","uid":"b181608e-80a7-4ef3-9702-315fe76bc83b","resourceVersion":"764","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.15:2379","kubernetes.io/config.hash":"be038e8a848fb24f787b8a2643981714","kubernetes.io/config.mirror":"be038e8a848fb24f787b8a2643981714","kubernetes.io/config.seen":"2024-07-17T17:49:49.643438469Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations"
:{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30912 chars]
	I0717 10:53:57.770217    4493 kubeadm.go:739] kubelet initialised
	I0717 10:53:57.770227    4493 kubeadm.go:740] duration metric: took 3.020268ms waiting for restarted kubelet to initialise ...
	I0717 10:53:57.770235    4493 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:53:57.770264    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods
	I0717 10:53:57.770269    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.770274    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.770278    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.772341    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:53:57.772349    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.772355    4493 round_trippers.go:580]     Audit-Id: 7427f09f-9885-4387-9ebb-cc9207414853
	I0717 10:53:57.772358    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.772360    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.772363    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.772365    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.772367    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.773154    4493 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"772"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"766","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 87605 chars]
	I0717 10:53:57.775015    4493 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-nlwxm" in "kube-system" namespace to be "Ready" ...
	I0717 10:53:57.775053    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nlwxm
	I0717 10:53:57.775058    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.775064    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.775068    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.776272    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:53:57.776282    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.776289    4493 round_trippers.go:580]     Audit-Id: 5297d168-8648-4e01-8cd2-0657b77b7bc7
	I0717 10:53:57.776292    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.776295    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.776297    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.776300    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.776302    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.776563    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"766","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0717 10:53:57.776815    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:53:57.776823    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.776829    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.776834    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.782913    4493 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 10:53:57.782925    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.782931    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.782935    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.782938    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.782940    4493 round_trippers.go:580]     Audit-Id: c976c521-8bea-4439-98cc-ba7021ededb8
	I0717 10:53:57.782943    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.782945    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.783144    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:53:57.783354    4493 pod_ready.go:97] node "multinode-875000" hosting pod "coredns-7db6d8ff4d-nlwxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
	I0717 10:53:57.783365    4493 pod_ready.go:81] duration metric: took 8.340564ms for pod "coredns-7db6d8ff4d-nlwxm" in "kube-system" namespace to be "Ready" ...
	E0717 10:53:57.783372    4493 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-875000" hosting pod "coredns-7db6d8ff4d-nlwxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
	I0717 10:53:57.783377    4493 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:53:57.783411    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-875000
	I0717 10:53:57.783416    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.783422    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.783426    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.784965    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:53:57.784973    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.784978    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.784982    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.784985    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.784988    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.784991    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.784993    4493 round_trippers.go:580]     Audit-Id: 21e31e82-6db8-463b-bb85-60fb550aefb1
	I0717 10:53:57.785293    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-875000","namespace":"kube-system","uid":"b181608e-80a7-4ef3-9702-315fe76bc83b","resourceVersion":"764","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.15:2379","kubernetes.io/config.hash":"be038e8a848fb24f787b8a2643981714","kubernetes.io/config.mirror":"be038e8a848fb24f787b8a2643981714","kubernetes.io/config.seen":"2024-07-17T17:49:49.643438469Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0717 10:53:57.785542    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:53:57.785549    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.785554    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.785558    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.786881    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:53:57.786893    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.786898    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.786901    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.786904    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.786907    4493 round_trippers.go:580]     Audit-Id: 639c3cdc-0ed9-4159-a3dc-e0a5147be43b
	I0717 10:53:57.786920    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.786926    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.787108    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:53:57.787281    4493 pod_ready.go:97] node "multinode-875000" hosting pod "etcd-multinode-875000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
	I0717 10:53:57.787291    4493 pod_ready.go:81] duration metric: took 3.908842ms for pod "etcd-multinode-875000" in "kube-system" namespace to be "Ready" ...
	E0717 10:53:57.787297    4493 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-875000" hosting pod "etcd-multinode-875000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
	I0717 10:53:57.787308    4493 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:53:57.787337    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-875000
	I0717 10:53:57.787342    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.787347    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.787355    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.788622    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:53:57.788632    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.788637    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.788640    4493 round_trippers.go:580]     Audit-Id: b08b5c94-9e60-4160-b6a2-9a509b39286e
	I0717 10:53:57.788643    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.788646    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.788649    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.788651    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.788821    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-875000","namespace":"kube-system","uid":"994530a7-11e7-4b05-95ec-c77751a6c24d","resourceVersion":"763","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.15:8443","kubernetes.io/config.hash":"61312bb99a3953bb6d2fa540cea05ce5","kubernetes.io/config.mirror":"61312bb99a3953bb6d2fa540cea05ce5","kubernetes.io/config.seen":"2024-07-17T17:49:49.643441506Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8135 chars]
	I0717 10:53:57.789068    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:53:57.789075    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.789081    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.789086    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.790337    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:53:57.790344    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.790348    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.790351    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.790354    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.790356    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.790359    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.790363    4493 round_trippers.go:580]     Audit-Id: aea73fd6-d62c-4d66-aaf5-b2b8486da07e
	I0717 10:53:57.790523    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:53:57.790695    4493 pod_ready.go:97] node "multinode-875000" hosting pod "kube-apiserver-multinode-875000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
	I0717 10:53:57.790704    4493 pod_ready.go:81] duration metric: took 3.391173ms for pod "kube-apiserver-multinode-875000" in "kube-system" namespace to be "Ready" ...
	E0717 10:53:57.790711    4493 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-875000" hosting pod "kube-apiserver-multinode-875000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
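Each pod_ready block follows the same pattern: fetch the pod, fetch the node it is scheduled on, and skip the wait (producing the pod_ready:97/:66 lines above) when that node's Ready condition is not True. A client-go sketch of one iteration of that check, using a pod and namespace name from the log (the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-apiserver-multinode-875000", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if !nodeReady(node) {
		fmt.Printf("node %q not Ready, skipping wait for pod %q\n", node.Name, pod.Name)
		return
	}
	fmt.Printf("node %q Ready, would now wait for pod %q to become Ready\n", node.Name, pod.Name)
}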
	I0717 10:53:57.790716    4493 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:53:57.790748    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-875000
	I0717 10:53:57.790753    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.790758    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.790762    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.792799    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:53:57.792807    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.792813    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.792817    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.792820    4493 round_trippers.go:580]     Audit-Id: d6dad934-a3f9-4e3d-8e87-bacbb94674b4
	I0717 10:53:57.792823    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.792831    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.792835    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.793248    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-875000","namespace":"kube-system","uid":"10a5876c-ddf6-4f37-82ca-96ea7ebde028","resourceVersion":"762","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2fd022622c52536f8b0b923ecacc7ea2","kubernetes.io/config.mirror":"2fd022622c52536f8b0b923ecacc7ea2","kubernetes.io/config.seen":"2024-07-17T17:49:49.643442180Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7726 chars]
	I0717 10:53:57.868227    4493 request.go:629] Waited for 74.688427ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:53:57.868258    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:53:57.868263    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:57.868269    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:57.868274    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:57.875551    4493 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 10:53:57.875563    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:57.875568    4493 round_trippers.go:580]     Audit-Id: 6a7ba713-3120-442c-a7e6-c118812e297c
	I0717 10:53:57.875571    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:57.875573    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:57.875576    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:57.875578    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:57.875581    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:57 GMT
	I0717 10:53:57.875660    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:53:57.875854    4493 pod_ready.go:97] node "multinode-875000" hosting pod "kube-controller-manager-multinode-875000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
	I0717 10:53:57.875865    4493 pod_ready.go:81] duration metric: took 85.142117ms for pod "kube-controller-manager-multinode-875000" in "kube-system" namespace to be "Ready" ...
	E0717 10:53:57.875893    4493 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-875000" hosting pod "kube-controller-manager-multinode-875000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
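Note on the "Waited for ... due to client-side throttling, not priority and fairness" lines in this log: that delay is imposed by client-go's own token-bucket rate limiter before the request is ever sent, not by API Priority and Fairness on the apiserver. A minimal sketch of where that limiter is configured is below; the QPS/Burst values and kubeconfig path are assumptions for illustration, not minikube's actual settings or code.

	// Sketch only: client-go throttles requests on the client side based on
	// rest.Config.QPS and rest.Config.Burst. Values here are illustrative.
	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig path, for illustration only.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cfg.QPS = 5    // sustained requests per second before the client starts waiting
		cfg.Burst = 10 // short-burst headroom above QPS
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			panic(err)
		}
	}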
	I0717 10:53:57.875903    4493 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dnn4j" in "kube-system" namespace to be "Ready" ...
	I0717 10:53:58.069150    4493 request.go:629] Waited for 193.177485ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dnn4j
	I0717 10:53:58.069278    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dnn4j
	I0717 10:53:58.069287    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:58.069298    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:58.069306    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:58.072012    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:53:58.072026    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:58.072033    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:58 GMT
	I0717 10:53:58.072037    4493 round_trippers.go:580]     Audit-Id: 4536672e-50b0-4b49-9609-ab98d69dfd87
	I0717 10:53:58.072040    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:58.072044    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:58.072048    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:58.072052    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:58.072185    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dnn4j","generateName":"kube-proxy-","namespace":"kube-system","uid":"fd7faf4d-f212-4c89-9ac5-8e408c295411","resourceVersion":"714","creationTimestamp":"2024-07-17T17:51:33Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8a69f9ff-027d-455b-b65a-6c9aef9936e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:51:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a69f9ff-027d-455b-b65a-6c9aef9936e7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0717 10:53:58.269376    4493 request.go:629] Waited for 196.847669ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m03
	I0717 10:53:58.269432    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m03
	I0717 10:53:58.269440    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:58.269452    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:58.269459    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:58.271840    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:53:58.271855    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:58.271862    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:58.271866    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:58.271870    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:58.271873    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:58 GMT
	I0717 10:53:58.271877    4493 round_trippers.go:580]     Audit-Id: 0196d5bf-6ac4-4fea-9bd3-70df2ee429f2
	I0717 10:53:58.271880    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:58.271984    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m03","uid":"4dcfd269-94b0-4652-bd6d-b7d938fc2b6d","resourceVersion":"741","creationTimestamp":"2024-07-17T17:52:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_52_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:52:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3641 chars]
	I0717 10:53:58.272205    4493 pod_ready.go:92] pod "kube-proxy-dnn4j" in "kube-system" namespace has status "Ready":"True"
	I0717 10:53:58.272216    4493 pod_ready.go:81] duration metric: took 396.292994ms for pod "kube-proxy-dnn4j" in "kube-system" namespace to be "Ready" ...
	I0717 10:53:58.272224    4493 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tp2zz" in "kube-system" namespace to be "Ready" ...
	I0717 10:53:58.468795    4493 request.go:629] Waited for 196.519779ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp2zz
	I0717 10:53:58.468870    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp2zz
	I0717 10:53:58.468880    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:58.468891    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:58.468899    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:58.472107    4493 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:53:58.472119    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:58.472125    4493 round_trippers.go:580]     Audit-Id: 39bff14c-6da0-496c-9dec-9d82e6504e3e
	I0717 10:53:58.472129    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:58.472132    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:58.472135    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:58.472152    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:58.472157    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:58 GMT
	I0717 10:53:58.472249    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tp2zz","generateName":"kube-proxy-","namespace":"kube-system","uid":"9fda8ef7-b324-4cbb-a8d9-98f93132b2e7","resourceVersion":"486","creationTimestamp":"2024-07-17T17:50:42Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8a69f9ff-027d-455b-b65a-6c9aef9936e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a69f9ff-027d-455b-b65a-6c9aef9936e7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0717 10:53:58.669099    4493 request.go:629] Waited for 196.505789ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:53:58.669150    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:53:58.669180    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:58.669192    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:58.669200    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:58.671782    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:53:58.671797    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:58.671805    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:58 GMT
	I0717 10:53:58.671810    4493 round_trippers.go:580]     Audit-Id: 11493781-0c12-4492-b335-044080d1446d
	I0717 10:53:58.671813    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:58.671816    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:58.671819    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:58.671823    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:58.671994    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"e92886e5-127c-42d8-b0f7-76db7895a433","resourceVersion":"553","creationTimestamp":"2024-07-17T17:50:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_50_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3824 chars]
	I0717 10:53:58.672221    4493 pod_ready.go:92] pod "kube-proxy-tp2zz" in "kube-system" namespace has status "Ready":"True"
	I0717 10:53:58.672233    4493 pod_ready.go:81] duration metric: took 399.991658ms for pod "kube-proxy-tp2zz" in "kube-system" namespace to be "Ready" ...
	I0717 10:53:58.672242    4493 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zs8f8" in "kube-system" namespace to be "Ready" ...
	I0717 10:53:58.870210    4493 request.go:629] Waited for 197.913414ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zs8f8
	I0717 10:53:58.870347    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zs8f8
	I0717 10:53:58.870359    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:58.870370    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:58.870376    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:58.872970    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:53:58.872987    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:58.872995    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:58.872999    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:58.873003    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:58 GMT
	I0717 10:53:58.873007    4493 round_trippers.go:580]     Audit-Id: ca1db5d7-ea4c-4c14-993f-721dc53ac6a0
	I0717 10:53:58.873010    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:58.873013    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:58.873094    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zs8f8","generateName":"kube-proxy-","namespace":"kube-system","uid":"9e2bce56-d9e0-42a1-a265-4aab3577b031","resourceVersion":"774","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8a69f9ff-027d-455b-b65a-6c9aef9936e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a69f9ff-027d-455b-b65a-6c9aef9936e7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0717 10:53:59.069076    4493 request.go:629] Waited for 195.635449ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:53:59.069169    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:53:59.069176    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:59.069184    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:59.069190    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:59.071286    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:53:59.071307    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:59.071313    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:59 GMT
	I0717 10:53:59.071318    4493 round_trippers.go:580]     Audit-Id: e9daa319-d3fc-4813-b7af-f68df3e30559
	I0717 10:53:59.071346    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:59.071353    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:59.071356    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:59.071359    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:59.071435    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:53:59.071638    4493 pod_ready.go:97] node "multinode-875000" hosting pod "kube-proxy-zs8f8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
	I0717 10:53:59.071652    4493 pod_ready.go:81] duration metric: took 399.394541ms for pod "kube-proxy-zs8f8" in "kube-system" namespace to be "Ready" ...
	E0717 10:53:59.071660    4493 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-875000" hosting pod "kube-proxy-zs8f8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
	I0717 10:53:59.071665    4493 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:53:59.269426    4493 request.go:629] Waited for 197.709805ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-875000
	I0717 10:53:59.269482    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-875000
	I0717 10:53:59.269491    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:59.269518    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:59.269586    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:59.273267    4493 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:53:59.273281    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:59.273286    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:59.273290    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:59.273294    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:59.273297    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:59 GMT
	I0717 10:53:59.273301    4493 round_trippers.go:580]     Audit-Id: b4328630-f3e1-42b8-8900-f4dbf39dfdce
	I0717 10:53:59.273304    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:59.273390    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-875000","namespace":"kube-system","uid":"b2f1c23d-635b-490e-a964-c28e1566ead0","resourceVersion":"761","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e0081164f6599d019e7f075e4c8b7277","kubernetes.io/config.mirror":"e0081164f6599d019e7f075e4c8b7277","kubernetes.io/config.seen":"2024-07-17T17:49:49.643442746Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5438 chars]
	I0717 10:53:59.469363    4493 request.go:629] Waited for 195.721843ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:53:59.469531    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:53:59.469550    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:59.469565    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:59.469572    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:59.472420    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:53:59.472436    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:59.472443    4493 round_trippers.go:580]     Audit-Id: ee40a7d2-de32-4530-8134-79ca8c2b1e97
	I0717 10:53:59.472448    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:59.472452    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:59.472456    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:59.472459    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:59.472463    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:59 GMT
	I0717 10:53:59.472548    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:53:59.472792    4493 pod_ready.go:97] node "multinode-875000" hosting pod "kube-scheduler-multinode-875000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
	I0717 10:53:59.472806    4493 pod_ready.go:81] duration metric: took 401.124044ms for pod "kube-scheduler-multinode-875000" in "kube-system" namespace to be "Ready" ...
	E0717 10:53:59.472814    4493 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-875000" hosting pod "kube-scheduler-multinode-875000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000" has status "Ready":"False"
	I0717 10:53:59.472820    4493 pod_ready.go:38] duration metric: took 1.702533365s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
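For context on the pod_ready checks above: a pod counts as "Ready" when its PodReady condition is True, and the wait is skipped with a warning when the hosting node itself is not Ready, which is what happens here for the control-plane pods on multinode-875000. The sketch below shows that kind of condition check; it is illustrative only, not minikube's pod_ready implementation, and the kubeconfig path is an assumption.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumed kubeconfig path, for illustration only.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-apiserver-multinode-875000", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("ready:", podReady(pod))
	}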
	I0717 10:53:59.472837    4493 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 10:53:59.484509    4493 command_runner.go:130] > -16
	I0717 10:53:59.484566    4493 ops.go:34] apiserver oom_adj: -16
	I0717 10:53:59.484574    4493 kubeadm.go:597] duration metric: took 8.726108957s to restartPrimaryControlPlane
	I0717 10:53:59.484580    4493 kubeadm.go:394] duration metric: took 8.746781959s to StartCluster
	I0717 10:53:59.484590    4493 settings.go:142] acquiring lock: {Name:mkc45f011a907c66e2dbca7dadfff37ab48f7d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:53:59.484676    4493 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:53:59.485028    4493 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:53:59.485929    4493 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.15 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:53:59.485962    4493 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 10:53:59.486078    4493 config.go:182] Loaded profile config "multinode-875000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:53:59.527264    4493 out.go:177] * Verifying Kubernetes components...
	I0717 10:53:59.570382    4493 out.go:177] * Enabled addons: 
	I0717 10:53:59.591289    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:53:59.612289    4493 addons.go:510] duration metric: took 126.33187ms for enable addons: enabled=[]
	I0717 10:53:59.740151    4493 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:53:59.753734    4493 node_ready.go:35] waiting up to 6m0s for node "multinode-875000" to be "Ready" ...
	I0717 10:53:59.753791    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:53:59.753796    4493 round_trippers.go:469] Request Headers:
	I0717 10:53:59.753802    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:53:59.753806    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:53:59.755382    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:53:59.755391    4493 round_trippers.go:577] Response Headers:
	I0717 10:53:59.755397    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:53:59.755406    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:53:59.755409    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:53:59.755411    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:53:59.755415    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:53:59 GMT
	I0717 10:53:59.755417    4493 round_trippers.go:580]     Audit-Id: 46b5f3ac-1390-41c3-9d17-19db56ef8579
	I0717 10:53:59.755583    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:00.255453    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:00.255476    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:00.255488    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:00.255496    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:00.257693    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:00.257723    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:00.257769    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:00.257783    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:00.257790    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:00.257799    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:00.257806    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:00 GMT
	I0717 10:54:00.257810    4493 round_trippers.go:580]     Audit-Id: 25ece10c-b1e8-46f7-acb4-98a0ba7f80c4
	I0717 10:54:00.258014    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:00.754085    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:00.754107    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:00.754120    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:00.754125    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:00.756392    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:00.756405    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:00.756413    4493 round_trippers.go:580]     Audit-Id: f3b9f318-4fff-4032-9861-017f3ba37862
	I0717 10:54:00.756417    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:00.756420    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:00.756423    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:00.756426    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:00.756433    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:00 GMT
	I0717 10:54:00.756610    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:01.254735    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:01.254760    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:01.254771    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:01.254779    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:01.257145    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:01.257159    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:01.257166    4493 round_trippers.go:580]     Audit-Id: e251b3a7-69fb-4223-9473-88d54919cd71
	I0717 10:54:01.257171    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:01.257176    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:01.257181    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:01.257185    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:01.257190    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:01 GMT
	I0717 10:54:01.257449    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:01.754015    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:01.754036    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:01.754048    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:01.754054    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:01.756245    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:01.756275    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:01.756293    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:01 GMT
	I0717 10:54:01.756302    4493 round_trippers.go:580]     Audit-Id: 89741957-bf55-43f0-9f9e-46c8b05fa7ae
	I0717 10:54:01.756310    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:01.756315    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:01.756321    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:01.756339    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:01.756521    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:01.756757    4493 node_ready.go:53] node "multinode-875000" has status "Ready":"False"
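The node_ready loop above polls GET /api/v1/nodes/multinode-875000 roughly every 500ms and keeps reporting Ready=False because the node's NodeReady condition never turns True within the log shown here. A minimal sketch of that style of wait is below; the poll interval, timeout handling, and kubeconfig path are assumptions for illustration, not minikube's exact implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the node's NodeReady condition is True.
	func nodeReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumed kubeconfig path; the 500ms interval mirrors the log timestamps
		// above but is an assumption, not a verified minikube constant.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-875000", metav1.GetOptions{})
			if err == nil && nodeReady(node) {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for node to be Ready")
	}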
	I0717 10:54:02.254059    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:02.254143    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:02.254165    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:02.254173    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:02.257395    4493 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:54:02.257410    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:02.257417    4493 round_trippers.go:580]     Audit-Id: 9a205d18-8abf-468d-818c-232155c31735
	I0717 10:54:02.257433    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:02.257439    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:02.257442    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:02.257446    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:02.257451    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:02 GMT
	I0717 10:54:02.257753    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:02.754830    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:02.754843    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:02.754850    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:02.754854    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:02.756697    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:02.756715    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:02.756723    4493 round_trippers.go:580]     Audit-Id: 116c98cd-f772-4d28-a72e-2ab93e007f94
	I0717 10:54:02.756726    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:02.756729    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:02.756738    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:02.756743    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:02.756747    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:02 GMT
	I0717 10:54:02.756874    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:03.255286    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:03.255306    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:03.255319    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:03.255326    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:03.257932    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:03.257946    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:03.257953    4493 round_trippers.go:580]     Audit-Id: e4a16496-ff7d-4de5-ad6c-fb858787cf4e
	I0717 10:54:03.257957    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:03.257961    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:03.257966    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:03.257970    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:03.257975    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:03 GMT
	I0717 10:54:03.258077    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:03.754131    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:03.754186    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:03.754289    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:03.754305    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:03.756754    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:03.756775    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:03.756782    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:03 GMT
	I0717 10:54:03.756787    4493 round_trippers.go:580]     Audit-Id: 37090f35-74c4-4514-aaad-3d4684c670ad
	I0717 10:54:03.756803    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:03.756808    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:03.756813    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:03.756817    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:03.756889    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:03.757153    4493 node_ready.go:53] node "multinode-875000" has status "Ready":"False"
	I0717 10:54:04.254859    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:04.254882    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:04.254975    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:04.254984    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:04.257542    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:04.257556    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:04.257564    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:04 GMT
	I0717 10:54:04.257569    4493 round_trippers.go:580]     Audit-Id: e57b6670-108b-4e43-9146-26c87210969f
	I0717 10:54:04.257573    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:04.257577    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:04.257580    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:04.257583    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:04.257777    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:04.755386    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:04.755410    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:04.755459    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:04.755468    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:04.757807    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:04.757823    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:04.757834    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:04.757840    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:04.757844    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:04.757848    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:04 GMT
	I0717 10:54:04.757852    4493 round_trippers.go:580]     Audit-Id: 10215338-e734-4148-a946-3f9c852e0f8f
	I0717 10:54:04.757855    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:04.757974    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:05.254230    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:05.254259    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:05.254272    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:05.254278    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:05.257225    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:05.257241    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:05.257248    4493 round_trippers.go:580]     Audit-Id: ea906fc0-0e2a-4e41-acec-5fa673dcc27b
	I0717 10:54:05.257254    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:05.257258    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:05.257262    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:05.257266    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:05.257270    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:05 GMT
	I0717 10:54:05.257382    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:05.755232    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:05.755255    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:05.755266    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:05.755274    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:05.757968    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:05.757983    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:05.757990    4493 round_trippers.go:580]     Audit-Id: 4aac2d8d-5058-4a61-88ec-cc6a2ff69089
	I0717 10:54:05.757995    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:05.757999    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:05.758004    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:05.758007    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:05.758011    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:05 GMT
	I0717 10:54:05.758158    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:05.758417    4493 node_ready.go:53] node "multinode-875000" has status "Ready":"False"
	I0717 10:54:06.254111    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:06.254143    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:06.254197    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:06.254205    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:06.256826    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:06.256841    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:06.256849    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:06.256853    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:06.256856    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:06.256859    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:06 GMT
	I0717 10:54:06.256862    4493 round_trippers.go:580]     Audit-Id: 8072bf4f-9ef7-4723-ae40-96049993c191
	I0717 10:54:06.256866    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:06.256977    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:06.755692    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:06.755717    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:06.755728    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:06.755735    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:06.762687    4493 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 10:54:06.762704    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:06.762714    4493 round_trippers.go:580]     Audit-Id: efe793e6-64e8-4a1a-a1e7-f8a6763d1215
	I0717 10:54:06.762720    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:06.762725    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:06.762733    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:06.762739    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:06.762744    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:06 GMT
	I0717 10:54:06.763501    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:07.254211    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:07.254228    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:07.254236    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:07.254240    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:07.256316    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:07.256324    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:07.256330    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:07.256333    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:07.256335    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:07.256338    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:07 GMT
	I0717 10:54:07.256340    4493 round_trippers.go:580]     Audit-Id: 4f8184c2-cbb4-49e1-a13b-697efb477d7f
	I0717 10:54:07.256343    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:07.256595    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"756","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0717 10:54:07.754181    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:07.754197    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:07.754205    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:07.754211    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:07.756334    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:07.756343    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:07.756347    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:07.756351    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:07.756354    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:07.756358    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:07 GMT
	I0717 10:54:07.756363    4493 round_trippers.go:580]     Audit-Id: 0619515c-b586-4f1f-9e0c-08fb4d659c1f
	I0717 10:54:07.756366    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:07.756560    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"863","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5421 chars]
	I0717 10:54:08.254303    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:08.254319    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:08.254328    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:08.254333    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:08.256382    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:08.256391    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:08.256397    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:08 GMT
	I0717 10:54:08.256400    4493 round_trippers.go:580]     Audit-Id: e8abd655-64a1-49d4-8642-25ef654dc343
	I0717 10:54:08.256403    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:08.256412    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:08.256416    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:08.256421    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:08.256504    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"866","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0717 10:54:08.256688    4493 node_ready.go:53] node "multinode-875000" has status "Ready":"False"
	I0717 10:54:08.754929    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:08.754949    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:08.754961    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:08.754967    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:08.758213    4493 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:54:08.758235    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:08.758249    4493 round_trippers.go:580]     Audit-Id: 7972f044-cdf4-49a3-8a3d-625257bc3f8a
	I0717 10:54:08.758254    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:08.758258    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:08.758262    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:08.758301    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:08.758309    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:08 GMT
	I0717 10:54:08.758635    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"866","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0717 10:54:09.255617    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:09.255644    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:09.255713    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:09.255725    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:09.258267    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:09.258284    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:09.258293    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:09.258312    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:09 GMT
	I0717 10:54:09.258318    4493 round_trippers.go:580]     Audit-Id: 5f3509ed-2ccb-410e-8369-07df94c46387
	I0717 10:54:09.258322    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:09.258325    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:09.258348    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:09.258942    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"866","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0717 10:54:09.754128    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:09.754139    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:09.754145    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:09.754148    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:09.755581    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:09.755591    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:09.755595    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:09.755613    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:09.755628    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:09 GMT
	I0717 10:54:09.755638    4493 round_trippers.go:580]     Audit-Id: 25fbb815-9cb9-4e6f-b484-358d12aa1b97
	I0717 10:54:09.755647    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:09.755652    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:09.755740    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"866","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5516 chars]
	I0717 10:54:10.254758    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:10.254791    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:10.254803    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:10.254811    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:10.257339    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:10.257356    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:10.257364    4493 round_trippers.go:580]     Audit-Id: 63e8fdd7-2ee9-4d35-bf4d-e13f2a8e7298
	I0717 10:54:10.257369    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:10.257372    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:10.257376    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:10.257380    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:10.257383    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:10 GMT
	I0717 10:54:10.257497    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"874","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5293 chars]
	I0717 10:54:10.257747    4493 node_ready.go:49] node "multinode-875000" has status "Ready":"True"
	I0717 10:54:10.257763    4493 node_ready.go:38] duration metric: took 10.503727197s for node "multinode-875000" to be "Ready" ...
	I0717 10:54:10.257771    4493 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:54:10.257813    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods
	I0717 10:54:10.257819    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:10.257826    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:10.257832    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:10.260186    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:10.260197    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:10.260202    4493 round_trippers.go:580]     Audit-Id: 07867cca-d61d-41af-a776-f046997c3879
	I0717 10:54:10.260207    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:10.260211    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:10.260214    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:10.260218    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:10.260223    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:10 GMT
	I0717 10:54:10.261217    4493 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"874"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"766","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 86544 chars]
	I0717 10:54:10.263021    4493 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nlwxm" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:10.263058    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nlwxm
	I0717 10:54:10.263062    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:10.263068    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:10.263072    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:10.264134    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:10.264141    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:10.264145    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:10.264149    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:10 GMT
	I0717 10:54:10.264152    4493 round_trippers.go:580]     Audit-Id: 7ac35786-0221-4baa-a577-4b3196cea35f
	I0717 10:54:10.264155    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:10.264157    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:10.264166    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:10.264311    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"766","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0717 10:54:10.264544    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:10.264551    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:10.264556    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:10.264559    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:10.265861    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:10.265869    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:10.265877    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:10.265881    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:10.265885    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:10 GMT
	I0717 10:54:10.265888    4493 round_trippers.go:580]     Audit-Id: 82dd2eae-9d22-44f3-aeaa-831bc057e4b6
	I0717 10:54:10.265891    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:10.265895    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:10.266054    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"874","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5293 chars]
	I0717 10:54:10.763930    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nlwxm
	I0717 10:54:10.763951    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:10.763963    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:10.763969    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:10.766465    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:10.766478    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:10.766504    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:10.766517    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:10.766524    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:10 GMT
	I0717 10:54:10.766532    4493 round_trippers.go:580]     Audit-Id: 639f1710-1c3b-454c-a996-ffd9332bba25
	I0717 10:54:10.766538    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:10.766544    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:10.766747    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"766","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0717 10:54:10.767149    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:10.767159    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:10.767167    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:10.767172    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:10.768554    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:10.768562    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:10.768566    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:10.768571    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:10.768576    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:10 GMT
	I0717 10:54:10.768580    4493 round_trippers.go:580]     Audit-Id: 71e424b7-052c-4661-8537-426df69d70bd
	I0717 10:54:10.768585    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:10.768588    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:10.768726    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"874","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5293 chars]
	I0717 10:54:11.264901    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nlwxm
	I0717 10:54:11.264931    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:11.264945    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:11.264951    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:11.267695    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:11.267711    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:11.267718    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:11.267722    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:11.267727    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:11.267730    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:11.267734    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:11 GMT
	I0717 10:54:11.267737    4493 round_trippers.go:580]     Audit-Id: f4529cd1-ed9e-424c-a40c-ef0c63483fc1
	I0717 10:54:11.267816    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"766","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0717 10:54:11.268182    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:11.268191    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:11.268200    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:11.268204    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:11.269426    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:11.269437    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:11.269444    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:11.269457    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:11.269463    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:11.269467    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:11 GMT
	I0717 10:54:11.269469    4493 round_trippers.go:580]     Audit-Id: 6d01dc6f-b546-4d5f-98ad-8758a2bd0883
	I0717 10:54:11.269472    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:11.269588    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"874","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5293 chars]
	I0717 10:54:11.763239    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nlwxm
	I0717 10:54:11.763259    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:11.763267    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:11.763273    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:11.781636    4493 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0717 10:54:11.781648    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:11.781653    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:11.781657    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:11.781659    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:11.781661    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:11 GMT
	I0717 10:54:11.781665    4493 round_trippers.go:580]     Audit-Id: 0cc8d777-8a0f-4cc8-aec4-50dd131b8dcc
	I0717 10:54:11.781667    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:11.781821    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"766","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0717 10:54:11.782102    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:11.782109    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:11.782115    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:11.782118    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:11.783465    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:11.783476    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:11.783481    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:11 GMT
	I0717 10:54:11.783485    4493 round_trippers.go:580]     Audit-Id: 11b00e20-fa3e-4583-873b-fb18bc000c5f
	I0717 10:54:11.783489    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:11.783491    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:11.783495    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:11.783504    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:11.783737    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"874","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5293 chars]
	I0717 10:54:12.263974    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nlwxm
	I0717 10:54:12.263997    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:12.264008    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:12.264014    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:12.266338    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:12.266349    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:12.266356    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:12.266360    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:12.266365    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:12 GMT
	I0717 10:54:12.266369    4493 round_trippers.go:580]     Audit-Id: e87ccaf0-d00c-41fd-8c96-4af67af66ae5
	I0717 10:54:12.266374    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:12.266377    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:12.266664    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"766","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0717 10:54:12.266936    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:12.266943    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:12.266949    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:12.266953    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:12.268163    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:12.268172    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:12.268177    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:12.268180    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:12.268184    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:12.268186    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:12 GMT
	I0717 10:54:12.268190    4493 round_trippers.go:580]     Audit-Id: af4810ca-5ed3-4340-86e5-a55da617acac
	I0717 10:54:12.268193    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:12.268265    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"874","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5293 chars]
	I0717 10:54:12.268439    4493 pod_ready.go:102] pod "coredns-7db6d8ff4d-nlwxm" in "kube-system" namespace has status "Ready":"False"
	I0717 10:54:12.763313    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nlwxm
	I0717 10:54:12.763326    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:12.763332    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:12.763337    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:12.765317    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:12.765328    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:12.765333    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:12.765340    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:12 GMT
	I0717 10:54:12.765344    4493 round_trippers.go:580]     Audit-Id: 64ee0eff-0ef6-4b21-a7d2-f58f3cde573b
	I0717 10:54:12.765347    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:12.765352    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:12.765355    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:12.765527    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"766","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0717 10:54:12.765805    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:12.765812    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:12.765818    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:12.765822    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:12.774292    4493 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 10:54:12.774304    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:12.774309    4493 round_trippers.go:580]     Audit-Id: 2078b5f7-f16e-4a6b-b755-fe07f89a7880
	I0717 10:54:12.774313    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:12.774315    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:12.774332    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:12.774339    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:12.774341    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:12 GMT
	I0717 10:54:12.774457    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:54:13.263384    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nlwxm
	I0717 10:54:13.263405    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.263417    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.263423    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.265900    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:13.265912    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.265919    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.265946    4493 round_trippers.go:580]     Audit-Id: dea831ee-8530-481a-80eb-5da3319467b4
	I0717 10:54:13.265956    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.265961    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.265965    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.265975    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.266129    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"895","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6783 chars]
	I0717 10:54:13.266490    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:13.266497    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.266503    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.266506    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.267669    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:13.267677    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.267682    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.267684    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.267687    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.267691    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.267694    4493 round_trippers.go:580]     Audit-Id: bbfe4b42-85c9-439c-8d92-08e6f6a64ee5
	I0717 10:54:13.267697    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.267986    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:54:13.268154    4493 pod_ready.go:92] pod "coredns-7db6d8ff4d-nlwxm" in "kube-system" namespace has status "Ready":"True"
	I0717 10:54:13.268163    4493 pod_ready.go:81] duration metric: took 3.005051569s for pod "coredns-7db6d8ff4d-nlwxm" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.268172    4493 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.268203    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-875000
	I0717 10:54:13.268207    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.268213    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.268217    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.269265    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:13.269274    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.269279    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.269283    4493 round_trippers.go:580]     Audit-Id: 73a01a19-0d1f-449f-b94e-c4171f6e316f
	I0717 10:54:13.269285    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.269288    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.269292    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.269295    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.269400    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-875000","namespace":"kube-system","uid":"b181608e-80a7-4ef3-9702-315fe76bc83b","resourceVersion":"868","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.15:2379","kubernetes.io/config.hash":"be038e8a848fb24f787b8a2643981714","kubernetes.io/config.mirror":"be038e8a848fb24f787b8a2643981714","kubernetes.io/config.seen":"2024-07-17T17:49:49.643438469Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6357 chars]
	I0717 10:54:13.269623    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:13.269630    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.269636    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.269639    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.270650    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:54:13.270657    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.270663    4493 round_trippers.go:580]     Audit-Id: 4c5f293c-35dd-434f-b207-81722a7d3607
	I0717 10:54:13.270666    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.270669    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.270671    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.270674    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.270678    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.270867    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:54:13.271029    4493 pod_ready.go:92] pod "etcd-multinode-875000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:54:13.271037    4493 pod_ready.go:81] duration metric: took 2.859825ms for pod "etcd-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.271048    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.271074    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-875000
	I0717 10:54:13.271078    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.271083    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.271086    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.272092    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:54:13.272099    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.272103    4493 round_trippers.go:580]     Audit-Id: 46275850-dae1-44fa-bb5a-d0ae062b1988
	I0717 10:54:13.272107    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.272110    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.272120    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.272123    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.272125    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.272245    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-875000","namespace":"kube-system","uid":"994530a7-11e7-4b05-95ec-c77751a6c24d","resourceVersion":"872","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.15:8443","kubernetes.io/config.hash":"61312bb99a3953bb6d2fa540cea05ce5","kubernetes.io/config.mirror":"61312bb99a3953bb6d2fa540cea05ce5","kubernetes.io/config.seen":"2024-07-17T17:49:49.643441506Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0717 10:54:13.272462    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:13.272469    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.272475    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.272479    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.273341    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:54:13.273350    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.273359    4493 round_trippers.go:580]     Audit-Id: 3988e007-6201-47dc-b623-0b3930a1efd3
	I0717 10:54:13.273364    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.273369    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.273372    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.273377    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.273386    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.273509    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:54:13.273672    4493 pod_ready.go:92] pod "kube-apiserver-multinode-875000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:54:13.273680    4493 pod_ready.go:81] duration metric: took 2.627644ms for pod "kube-apiserver-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.273686    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.273713    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-875000
	I0717 10:54:13.273718    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.273723    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.273727    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.274692    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:54:13.274702    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.274709    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.274713    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.274715    4493 round_trippers.go:580]     Audit-Id: 1c515b9e-1497-4f78-b89a-367c2ae6ba35
	I0717 10:54:13.274733    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.274737    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.274741    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.274841    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-875000","namespace":"kube-system","uid":"10a5876c-ddf6-4f37-82ca-96ea7ebde028","resourceVersion":"875","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2fd022622c52536f8b0b923ecacc7ea2","kubernetes.io/config.mirror":"2fd022622c52536f8b0b923ecacc7ea2","kubernetes.io/config.seen":"2024-07-17T17:49:49.643442180Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7464 chars]
	I0717 10:54:13.275068    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:13.275075    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.275081    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.275084    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.275968    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:54:13.275974    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.275978    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.275982    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.275984    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.275988    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.275991    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.275994    4493 round_trippers.go:580]     Audit-Id: f03b2211-9d94-4ef0-ba6b-bcb945afcb10
	I0717 10:54:13.276086    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:54:13.276244    4493 pod_ready.go:92] pod "kube-controller-manager-multinode-875000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:54:13.276251    4493 pod_ready.go:81] duration metric: took 2.559695ms for pod "kube-controller-manager-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.276258    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dnn4j" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.276284    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dnn4j
	I0717 10:54:13.276289    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.276295    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.276298    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.277072    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:54:13.277078    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.277082    4493 round_trippers.go:580]     Audit-Id: 95cf34fc-1e1b-4a95-be0f-8ea41b1d3af3
	I0717 10:54:13.277086    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.277089    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.277095    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.277098    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.277101    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.277277    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dnn4j","generateName":"kube-proxy-","namespace":"kube-system","uid":"fd7faf4d-f212-4c89-9ac5-8e408c295411","resourceVersion":"714","creationTimestamp":"2024-07-17T17:51:33Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8a69f9ff-027d-455b-b65a-6c9aef9936e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:51:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a69f9ff-027d-455b-b65a-6c9aef9936e7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0717 10:54:13.277490    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m03
	I0717 10:54:13.277497    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.277503    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.277506    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.278360    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:54:13.278367    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.278372    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.278376    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.278379    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.278382    4493 round_trippers.go:580]     Audit-Id: 4ccbd8da-3f86-4bed-a7d3-60c99729f14a
	I0717 10:54:13.278387    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.278390    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.278486    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m03","uid":"4dcfd269-94b0-4652-bd6d-b7d938fc2b6d","resourceVersion":"741","creationTimestamp":"2024-07-17T17:52:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_52_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:52:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3641 chars]
	I0717 10:54:13.278623    4493 pod_ready.go:92] pod "kube-proxy-dnn4j" in "kube-system" namespace has status "Ready":"True"
	I0717 10:54:13.278630    4493 pod_ready.go:81] duration metric: took 2.368204ms for pod "kube-proxy-dnn4j" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.278637    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tp2zz" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.464644    4493 request.go:629] Waited for 185.945191ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp2zz
	I0717 10:54:13.464688    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp2zz
	I0717 10:54:13.464696    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.464709    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.464720    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.467132    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:13.467146    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.467156    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.467164    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.467171    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.467176    4493 round_trippers.go:580]     Audit-Id: ea4bef63-f5a4-4592-b6b7-6c8be4654625
	I0717 10:54:13.467183    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.467186    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.467358    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tp2zz","generateName":"kube-proxy-","namespace":"kube-system","uid":"9fda8ef7-b324-4cbb-a8d9-98f93132b2e7","resourceVersion":"486","creationTimestamp":"2024-07-17T17:50:42Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8a69f9ff-027d-455b-b65a-6c9aef9936e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a69f9ff-027d-455b-b65a-6c9aef9936e7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0717 10:54:13.663665    4493 request.go:629] Waited for 195.975703ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:54:13.663719    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:54:13.663729    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.663742    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.663749    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.666750    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:13.666761    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.666768    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.666772    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.666777    4493 round_trippers.go:580]     Audit-Id: 6dca5e6e-7ea6-4165-8f45-6234c65ce6ef
	I0717 10:54:13.666781    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.666786    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.666789    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.667152    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"e92886e5-127c-42d8-b0f7-76db7895a433","resourceVersion":"553","creationTimestamp":"2024-07-17T17:50:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_50_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3824 chars]
	I0717 10:54:13.667319    4493 pod_ready.go:92] pod "kube-proxy-tp2zz" in "kube-system" namespace has status "Ready":"True"
	I0717 10:54:13.667328    4493 pod_ready.go:81] duration metric: took 388.674095ms for pod "kube-proxy-tp2zz" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.667334    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zs8f8" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:13.865064    4493 request.go:629] Waited for 197.643779ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zs8f8
	I0717 10:54:13.865205    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zs8f8
	I0717 10:54:13.865214    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:13.865228    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:13.865236    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:13.868015    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:13.868027    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:13.868035    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:13.868039    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:13.868042    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:13.868046    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:13.868049    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:13 GMT
	I0717 10:54:13.868052    4493 round_trippers.go:580]     Audit-Id: e223f844-6428-4437-ba2c-aa4b12136065
	I0717 10:54:13.868219    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zs8f8","generateName":"kube-proxy-","namespace":"kube-system","uid":"9e2bce56-d9e0-42a1-a265-4aab3577b031","resourceVersion":"774","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8a69f9ff-027d-455b-b65a-6c9aef9936e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a69f9ff-027d-455b-b65a-6c9aef9936e7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0717 10:54:14.064048    4493 request.go:629] Waited for 195.483185ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:14.064180    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:14.064188    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:14.064196    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:14.064202    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:14.066888    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:14.066898    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:14.066903    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:14.066908    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:14.066913    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:14 GMT
	I0717 10:54:14.066917    4493 round_trippers.go:580]     Audit-Id: 1afc25f1-31c7-49a8-a875-9ca832383835
	I0717 10:54:14.066924    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:14.066928    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:14.067005    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:54:14.067189    4493 pod_ready.go:92] pod "kube-proxy-zs8f8" in "kube-system" namespace has status "Ready":"True"
	I0717 10:54:14.067198    4493 pod_ready.go:81] duration metric: took 399.84786ms for pod "kube-proxy-zs8f8" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:14.067205    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:14.263903    4493 request.go:629] Waited for 196.644333ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-875000
	I0717 10:54:14.264000    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-875000
	I0717 10:54:14.264010    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:14.264020    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:14.264027    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:14.266779    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:14.266795    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:14.266802    4493 round_trippers.go:580]     Audit-Id: 7ca8298e-839c-4aba-84ea-dffab2142eef
	I0717 10:54:14.266808    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:14.266815    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:14.266818    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:14.266821    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:14.266825    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:14 GMT
	I0717 10:54:14.266915    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-875000","namespace":"kube-system","uid":"b2f1c23d-635b-490e-a964-c28e1566ead0","resourceVersion":"877","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e0081164f6599d019e7f075e4c8b7277","kubernetes.io/config.mirror":"e0081164f6599d019e7f075e4c8b7277","kubernetes.io/config.seen":"2024-07-17T17:49:49.643442746Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5194 chars]
	I0717 10:54:14.464779    4493 request.go:629] Waited for 197.568211ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:14.464861    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:54:14.464873    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:14.464884    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:14.464891    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:14.467251    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:14.467270    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:14.467280    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:14.467286    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:14.467291    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:14.467296    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:14.467301    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:14 GMT
	I0717 10:54:14.467308    4493 round_trippers.go:580]     Audit-Id: e840e5b4-4625-4177-8f0d-ce3feb728bc4
	I0717 10:54:14.467505    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:54:14.467750    4493 pod_ready.go:92] pod "kube-scheduler-multinode-875000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:54:14.467761    4493 pod_ready.go:81] duration metric: took 400.535511ms for pod "kube-scheduler-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:54:14.467770    4493 pod_ready.go:38] duration metric: took 4.209877934s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:54:14.467784    4493 api_server.go:52] waiting for apiserver process to appear ...
	I0717 10:54:14.467850    4493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 10:54:14.480570    4493 command_runner.go:130] > 1676
	I0717 10:54:14.480661    4493 api_server.go:72] duration metric: took 14.994312803s to wait for apiserver process to appear ...
	I0717 10:54:14.480671    4493 api_server.go:88] waiting for apiserver healthz status ...
	I0717 10:54:14.480681    4493 api_server.go:253] Checking apiserver healthz at https://192.169.0.15:8443/healthz ...
	I0717 10:54:14.484062    4493 api_server.go:279] https://192.169.0.15:8443/healthz returned 200:
	ok
	I0717 10:54:14.484095    4493 round_trippers.go:463] GET https://192.169.0.15:8443/version
	I0717 10:54:14.484100    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:14.484116    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:14.484122    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:14.484530    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:54:14.484536    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:14.484541    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:14.484544    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:14.484547    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:14.484565    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:14.484572    4493 round_trippers.go:580]     Content-Length: 263
	I0717 10:54:14.484575    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:14 GMT
	I0717 10:54:14.484579    4493 round_trippers.go:580]     Audit-Id: 34880f8e-1473-47f5-8b2a-9d08ec58e191
	I0717 10:54:14.484587    4493 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0717 10:54:14.484609    4493 api_server.go:141] control plane version: v1.30.2
	I0717 10:54:14.484617    4493 api_server.go:131] duration metric: took 3.941657ms to wait for apiserver health ...
	I0717 10:54:14.484622    4493 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 10:54:14.663731    4493 request.go:629] Waited for 179.021176ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods
	I0717 10:54:14.663779    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods
	I0717 10:54:14.663787    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:14.663797    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:14.663803    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:14.667437    4493 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:54:14.667448    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:14.667453    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:14.667458    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:14.667461    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:14 GMT
	I0717 10:54:14.667463    4493 round_trippers.go:580]     Audit-Id: b65801a4-2b15-427a-9824-3de9a8975246
	I0717 10:54:14.667465    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:14.667467    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:14.668120    4493 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"900"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"895","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85985 chars]
	I0717 10:54:14.669910    4493 system_pods.go:59] 12 kube-system pods found
	I0717 10:54:14.669920    4493 system_pods.go:61] "coredns-7db6d8ff4d-nlwxm" [d9e6c103-3eba-4549-b327-23c87ce480cd] Running
	I0717 10:54:14.669923    4493 system_pods.go:61] "etcd-multinode-875000" [b181608e-80a7-4ef3-9702-315fe76bc83b] Running
	I0717 10:54:14.669926    4493 system_pods.go:61] "kindnet-fnltt" [31c26a51-23d0-4f20-a716-fbe77e2d1347] Running
	I0717 10:54:14.669928    4493 system_pods.go:61] "kindnet-hwkds" [41b256d2-0784-4ebc-82a6-1d435f44924e] Running
	I0717 10:54:14.669931    4493 system_pods.go:61] "kindnet-pj9kh" [fd101f4e-0ee3-45fa-b5ed-0957fb0c87f5] Running
	I0717 10:54:14.669933    4493 system_pods.go:61] "kube-apiserver-multinode-875000" [994530a7-11e7-4b05-95ec-c77751a6c24d] Running
	I0717 10:54:14.669936    4493 system_pods.go:61] "kube-controller-manager-multinode-875000" [10a5876c-ddf6-4f37-82ca-96ea7ebde028] Running
	I0717 10:54:14.669939    4493 system_pods.go:61] "kube-proxy-dnn4j" [fd7faf4d-f212-4c89-9ac5-8e408c295411] Running
	I0717 10:54:14.669941    4493 system_pods.go:61] "kube-proxy-tp2zz" [9fda8ef7-b324-4cbb-a8d9-98f93132b2e7] Running
	I0717 10:54:14.669943    4493 system_pods.go:61] "kube-proxy-zs8f8" [9e2bce56-d9e0-42a1-a265-4aab3577b031] Running
	I0717 10:54:14.669946    4493 system_pods.go:61] "kube-scheduler-multinode-875000" [b2f1c23d-635b-490e-a964-c28e1566ead0] Running
	I0717 10:54:14.669949    4493 system_pods.go:61] "storage-provisioner" [2bf95484-4db9-4dc1-80b0-b4a35569c9af] Running
	I0717 10:54:14.669953    4493 system_pods.go:74] duration metric: took 185.321479ms to wait for pod list to return data ...
	I0717 10:54:14.669958    4493 default_sa.go:34] waiting for default service account to be created ...
	I0717 10:54:14.863864    4493 request.go:629] Waited for 193.778992ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/default/serviceaccounts
	I0717 10:54:14.863916    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/default/serviceaccounts
	I0717 10:54:14.863925    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:14.863936    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:14.863945    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:14.866362    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:14.866377    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:14.866385    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:14.866389    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:14.866393    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:14.866398    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:14.866402    4493 round_trippers.go:580]     Content-Length: 261
	I0717 10:54:14.866407    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:14 GMT
	I0717 10:54:14.866411    4493 round_trippers.go:580]     Audit-Id: 2c02ca95-a798-4ded-8a08-3ad5eb3f92db
	I0717 10:54:14.866426    4493 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"900"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"beced86b-963a-4d04-b8e2-f402ded37dee","resourceVersion":"334","creationTimestamp":"2024-07-17T17:50:04Z"}}]}
	I0717 10:54:14.866566    4493 default_sa.go:45] found service account: "default"
	I0717 10:54:14.866579    4493 default_sa.go:55] duration metric: took 196.609666ms for default service account to be created ...
	I0717 10:54:14.866586    4493 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 10:54:15.065542    4493 request.go:629] Waited for 198.888032ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods
	I0717 10:54:15.065691    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods
	I0717 10:54:15.065703    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:15.065714    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:15.065720    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:15.069666    4493 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:54:15.069681    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:15.069688    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:15.069692    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:15 GMT
	I0717 10:54:15.069697    4493 round_trippers.go:580]     Audit-Id: f02a4964-b8b8-451f-95cc-d7d65087f49f
	I0717 10:54:15.069702    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:15.069706    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:15.069710    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:15.070314    4493 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"900"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"895","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85985 chars]
	I0717 10:54:15.072120    4493 system_pods.go:86] 12 kube-system pods found
	I0717 10:54:15.072131    4493 system_pods.go:89] "coredns-7db6d8ff4d-nlwxm" [d9e6c103-3eba-4549-b327-23c87ce480cd] Running
	I0717 10:54:15.072135    4493 system_pods.go:89] "etcd-multinode-875000" [b181608e-80a7-4ef3-9702-315fe76bc83b] Running
	I0717 10:54:15.072139    4493 system_pods.go:89] "kindnet-fnltt" [31c26a51-23d0-4f20-a716-fbe77e2d1347] Running
	I0717 10:54:15.072142    4493 system_pods.go:89] "kindnet-hwkds" [41b256d2-0784-4ebc-82a6-1d435f44924e] Running
	I0717 10:54:15.072145    4493 system_pods.go:89] "kindnet-pj9kh" [fd101f4e-0ee3-45fa-b5ed-0957fb0c87f5] Running
	I0717 10:54:15.072148    4493 system_pods.go:89] "kube-apiserver-multinode-875000" [994530a7-11e7-4b05-95ec-c77751a6c24d] Running
	I0717 10:54:15.072152    4493 system_pods.go:89] "kube-controller-manager-multinode-875000" [10a5876c-ddf6-4f37-82ca-96ea7ebde028] Running
	I0717 10:54:15.072156    4493 system_pods.go:89] "kube-proxy-dnn4j" [fd7faf4d-f212-4c89-9ac5-8e408c295411] Running
	I0717 10:54:15.072159    4493 system_pods.go:89] "kube-proxy-tp2zz" [9fda8ef7-b324-4cbb-a8d9-98f93132b2e7] Running
	I0717 10:54:15.072162    4493 system_pods.go:89] "kube-proxy-zs8f8" [9e2bce56-d9e0-42a1-a265-4aab3577b031] Running
	I0717 10:54:15.072167    4493 system_pods.go:89] "kube-scheduler-multinode-875000" [b2f1c23d-635b-490e-a964-c28e1566ead0] Running
	I0717 10:54:15.072170    4493 system_pods.go:89] "storage-provisioner" [2bf95484-4db9-4dc1-80b0-b4a35569c9af] Running
	I0717 10:54:15.072175    4493 system_pods.go:126] duration metric: took 205.57941ms to wait for k8s-apps to be running ...
	I0717 10:54:15.072185    4493 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 10:54:15.072235    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 10:54:15.084233    4493 system_svc.go:56] duration metric: took 12.047019ms WaitForService to wait for kubelet
	I0717 10:54:15.084251    4493 kubeadm.go:582] duration metric: took 15.5978861s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:54:15.084263    4493 node_conditions.go:102] verifying NodePressure condition ...
	I0717 10:54:15.263938    4493 request.go:629] Waited for 179.547286ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes
	I0717 10:54:15.263981    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes
	I0717 10:54:15.263989    4493 round_trippers.go:469] Request Headers:
	I0717 10:54:15.264006    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:54:15.264015    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:54:15.266530    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:54:15.266542    4493 round_trippers.go:577] Response Headers:
	I0717 10:54:15.266548    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:54:15 GMT
	I0717 10:54:15.266552    4493 round_trippers.go:580]     Audit-Id: 78d29157-5004-4eb6-a99e-8177f6794cd0
	I0717 10:54:15.266556    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:54:15.266560    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:54:15.266564    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:54:15.266568    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:54:15.266832    4493 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"900"},"items":[{"metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14675 chars]
	I0717 10:54:15.267346    4493 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:54:15.267358    4493 node_conditions.go:123] node cpu capacity is 2
	I0717 10:54:15.267367    4493 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:54:15.267372    4493 node_conditions.go:123] node cpu capacity is 2
	I0717 10:54:15.267379    4493 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:54:15.267382    4493 node_conditions.go:123] node cpu capacity is 2
	I0717 10:54:15.267387    4493 node_conditions.go:105] duration metric: took 183.115174ms to run NodePressure ...
	I0717 10:54:15.267398    4493 start.go:241] waiting for startup goroutines ...
	I0717 10:54:15.267406    4493 start.go:246] waiting for cluster config update ...
	I0717 10:54:15.267414    4493 start.go:255] writing updated cluster config ...
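	(The kubelet and NodePressure checks logged above can be reproduced by hand. A minimal sketch, assuming SSH access to the node and a working kubeconfig for this cluster; the commands are illustrative and not part of the test itself:

	    # on the node: same liveness check the log performs
	    sudo systemctl is-active --quiet kubelet && echo "kubelet is running"
	    # from the host: inspect the capacity values the NodePressure step reads
	    kubectl get nodes -o wide
	    kubectl describe nodes | grep -A 6 'Capacity:'
	)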
	I0717 10:54:15.288493    4493 out.go:177] 
	I0717 10:54:15.310166    4493 config.go:182] Loaded profile config "multinode-875000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:54:15.310253    4493 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/config.json ...
	I0717 10:54:15.332942    4493 out.go:177] * Starting "multinode-875000-m02" worker node in "multinode-875000" cluster
	I0717 10:54:15.374937    4493 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:54:15.374972    4493 cache.go:56] Caching tarball of preloaded images
	I0717 10:54:15.375171    4493 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:54:15.375189    4493 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:54:15.375313    4493 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/config.json ...
	I0717 10:54:15.376428    4493 start.go:360] acquireMachinesLock for multinode-875000-m02: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:54:15.376547    4493 start.go:364] duration metric: took 98.697µs to acquireMachinesLock for "multinode-875000-m02"
	I0717 10:54:15.376565    4493 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:54:15.376571    4493 fix.go:54] fixHost starting: m02
	I0717 10:54:15.376903    4493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:54:15.376927    4493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:54:15.385815    4493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53197
	I0717 10:54:15.386155    4493 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:54:15.386521    4493 main.go:141] libmachine: Using API Version  1
	I0717 10:54:15.386536    4493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:54:15.386770    4493 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:54:15.386894    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .DriverName
	I0717 10:54:15.386981    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetState
	I0717 10:54:15.387057    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:54:15.387155    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | hyperkit pid from json: 4164
	I0717 10:54:15.388061    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | hyperkit pid 4164 missing from process table
	I0717 10:54:15.388095    4493 fix.go:112] recreateIfNeeded on multinode-875000-m02: state=Stopped err=<nil>
	I0717 10:54:15.388108    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .DriverName
	W0717 10:54:15.388190    4493 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:54:15.409078    4493 out.go:177] * Restarting existing hyperkit VM for "multinode-875000-m02" ...
	I0717 10:54:15.452099    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .Start
	I0717 10:54:15.452333    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:54:15.452363    4493 main.go:141] libmachine: (multinode-875000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/hyperkit.pid
	I0717 10:54:15.453684    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | hyperkit pid 4164 missing from process table
	I0717 10:54:15.453700    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | pid 4164 is in state "Stopped"
	I0717 10:54:15.453720    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/hyperkit.pid...
	I0717 10:54:15.453950    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | Using UUID 25304374-eb81-4156-982c-d8f8ac747f78
	I0717 10:54:15.478721    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | Generated MAC de:84:ef:f1:8f:c7
	I0717 10:54:15.478745    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-875000
	I0717 10:54:15.478878    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"25304374-eb81-4156-982c-d8f8ac747f78", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aad20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0717 10:54:15.478921    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"25304374-eb81-4156-982c-d8f8ac747f78", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aad20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0717 10:54:15.478968    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "25304374-eb81-4156-982c-d8f8ac747f78", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/multinode-875000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/bzimage,/Users/j
enkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-875000"}
	I0717 10:54:15.479012    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 25304374-eb81-4156-982c-d8f8ac747f78 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/multinode-875000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/mult
inode-875000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-875000"
	I0717 10:54:15.479039    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:54:15.480395    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 DEBUG: hyperkit: Pid is 4537
	I0717 10:54:15.480831    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | Attempt 0
	I0717 10:54:15.480842    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:54:15.480973    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | hyperkit pid from json: 4537
	I0717 10:54:15.482820    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | Searching for de:84:ef:f1:8f:c7 in /var/db/dhcpd_leases ...
	I0717 10:54:15.482892    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | Found 16 entries in /var/db/dhcpd_leases!
	I0717 10:54:15.482929    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:c1:c6:6d:b5:4e ID:1,92:c1:c6:6d:b5:4e Lease:0x6699568d}
	I0717 10:54:15.482950    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:a2:dd:4c:c6:bd:14 ID:1,a2:dd:4c:c6:bd:14 Lease:0x669804f2}
	I0717 10:54:15.482968    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:de:84:ef:f1:8f:c7 ID:1,de:84:ef:f1:8f:c7 Lease:0x669955e8}
	I0717 10:54:15.482977    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | Found match: de:84:ef:f1:8f:c7
	I0717 10:54:15.482986    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | IP: 192.169.0.16
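	(The lease lookup above is how the hyperkit driver maps the VM's generated MAC address to an IP: it scans the macOS DHCP lease database for the MAC it just generated. A rough manual equivalent on the host, using the MAC taken from the log:

	    grep -i 'de:84:ef:f1:8f:c7' /var/db/dhcpd_leases
	)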
	I0717 10:54:15.482992    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetConfigRaw
	I0717 10:54:15.483656    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetIP
	I0717 10:54:15.483904    4493 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/config.json ...
	I0717 10:54:15.484413    4493 machine.go:94] provisionDockerMachine start ...
	I0717 10:54:15.484424    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .DriverName
	I0717 10:54:15.484552    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:54:15.484671    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:54:15.484775    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:15.484875    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:15.484962    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:54:15.485082    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:54:15.485237    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0717 10:54:15.485246    4493 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:54:15.488026    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:54:15.496175    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:54:15.497553    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:54:15.497569    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:54:15.497580    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:54:15.497589    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:54:15.879013    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:54:15.879027    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:54:15.993844    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:54:15.993862    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:54:15.993871    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:54:15.993879    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:54:15.994681    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:54:15.994690    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:54:21.256495    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:21 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:54:21.256562    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:21 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:54:21.256573    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:21 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:54:21.280111    4493 main.go:141] libmachine: (multinode-875000-m02) DBG | 2024/07/17 10:54:21 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:54:50.547294    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:54:50.547311    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetMachineName
	I0717 10:54:50.547440    4493 buildroot.go:166] provisioning hostname "multinode-875000-m02"
	I0717 10:54:50.547452    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetMachineName
	I0717 10:54:50.547549    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:54:50.547628    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:54:50.547725    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:50.547806    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:50.547894    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:54:50.548021    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:54:50.548160    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0717 10:54:50.548168    4493 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-875000-m02 && echo "multinode-875000-m02" | sudo tee /etc/hostname
	I0717 10:54:50.608395    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-875000-m02
	
	I0717 10:54:50.608420    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:54:50.608546    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:54:50.608639    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:50.608717    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:50.608801    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:54:50.608944    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:54:50.609098    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0717 10:54:50.609110    4493 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-875000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-875000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-875000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:54:50.667332    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: 
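	(To confirm the hostname provisioning above took effect, one can SSH into the node with the key path, user and IP that this log reports; illustrative only:

	    ssh -i /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/id_rsa \
	        docker@192.169.0.16 'hostname && grep multinode-875000-m02 /etc/hosts'
	)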
	I0717 10:54:50.667354    4493 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:54:50.667369    4493 buildroot.go:174] setting up certificates
	I0717 10:54:50.667375    4493 provision.go:84] configureAuth start
	I0717 10:54:50.667383    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetMachineName
	I0717 10:54:50.667509    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetIP
	I0717 10:54:50.667618    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:54:50.667712    4493 provision.go:143] copyHostCerts
	I0717 10:54:50.667740    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:54:50.667790    4493 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:54:50.667796    4493 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:54:50.668026    4493 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:54:50.668269    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:54:50.668302    4493 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:54:50.668307    4493 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:54:50.668430    4493 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:54:50.668592    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:54:50.668623    4493 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:54:50.668628    4493 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:54:50.668734    4493 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:54:50.668897    4493 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.multinode-875000-m02 san=[127.0.0.1 192.169.0.16 localhost minikube multinode-875000-m02]
	I0717 10:54:50.772544    4493 provision.go:177] copyRemoteCerts
	I0717 10:54:50.772596    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:54:50.772612    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:54:50.772743    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:54:50.772842    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:50.772925    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:54:50.773001    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/id_rsa Username:docker}
	I0717 10:54:50.805428    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:54:50.805497    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:54:50.825423    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:54:50.825506    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 10:54:50.844675    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:54:50.844753    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:54:50.863692    4493 provision.go:87] duration metric: took 196.298177ms to configureAuth
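	(configureAuth regenerates the machine's server certificate with the SANs listed above: 127.0.0.1, 192.169.0.16, localhost, minikube, multinode-875000-m02. A quick, illustrative way to verify the SANs on the generated certificate, assuming the ServerCertPath shown in the log:

	    openssl x509 -in /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem \
	        -noout -text | grep -A 1 'Subject Alternative Name'
	)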
	I0717 10:54:50.863710    4493 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:54:50.863892    4493 config.go:182] Loaded profile config "multinode-875000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:54:50.863923    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .DriverName
	I0717 10:54:50.864047    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:54:50.864143    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:54:50.864236    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:50.864315    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:50.864395    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:54:50.864501    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:54:50.864627    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0717 10:54:50.864635    4493 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:54:50.915603    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:54:50.915614    4493 buildroot.go:70] root file system type: tmpfs
	I0717 10:54:50.915694    4493 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:54:50.915704    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:54:50.915827    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:54:50.915913    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:50.915995    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:50.916077    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:54:50.916206    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:54:50.916351    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0717 10:54:50.916397    4493 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.15"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:54:50.976652    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.15
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:54:50.976670    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:54:50.976806    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:54:50.976915    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:50.977036    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:50.977129    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:54:50.977262    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:54:50.977409    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0717 10:54:50.977423    4493 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:54:52.540317    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:54:52.540331    4493 machine.go:97] duration metric: took 37.054917264s to provisionDockerMachine
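	(At this point dockerd on the new node should be listening on tcp://192.169.0.16:2376 with the TLS material copied to /etc/docker earlier. A hedged sanity check from the host, using the client certificates under .minikube/certs named in the auth options logged above:

	    docker --host tcp://192.169.0.16:2376 --tlsverify \
	        --tlscacert /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem \
	        --tlscert /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem \
	        --tlskey /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem \
	        version
	)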
	I0717 10:54:52.540340    4493 start.go:293] postStartSetup for "multinode-875000-m02" (driver="hyperkit")
	I0717 10:54:52.540349    4493 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:54:52.540359    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .DriverName
	I0717 10:54:52.540544    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:54:52.540556    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:54:52.540638    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:54:52.540730    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:52.540832    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:54:52.540909    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/id_rsa Username:docker}
	I0717 10:54:52.572875    4493 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:54:52.575767    4493 command_runner.go:130] > NAME=Buildroot
	I0717 10:54:52.575777    4493 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0717 10:54:52.575781    4493 command_runner.go:130] > ID=buildroot
	I0717 10:54:52.575784    4493 command_runner.go:130] > VERSION_ID=2023.02.9
	I0717 10:54:52.575788    4493 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0717 10:54:52.575851    4493 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:54:52.575861    4493 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:54:52.575959    4493 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:54:52.576150    4493 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:54:52.576156    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:54:52.576307    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:54:52.584281    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:54:52.603233    4493 start.go:296] duration metric: took 62.881414ms for postStartSetup
	I0717 10:54:52.603253    4493 fix.go:56] duration metric: took 37.225684715s for fixHost
	I0717 10:54:52.603269    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:54:52.603398    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:54:52.603486    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:52.603575    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:52.603658    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:54:52.603779    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:54:52.603916    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0717 10:54:52.603923    4493 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 10:54:52.654022    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721238892.728031416
	
	I0717 10:54:52.654033    4493 fix.go:216] guest clock: 1721238892.728031416
	I0717 10:54:52.654038    4493 fix.go:229] Guest: 2024-07-17 10:54:52.728031416 -0700 PDT Remote: 2024-07-17 10:54:52.603259 -0700 PDT m=+104.196631818 (delta=124.772416ms)
	I0717 10:54:52.654052    4493 fix.go:200] guest clock delta is within tolerance: 124.772416ms
	I0717 10:54:52.654056    4493 start.go:83] releasing machines lock for "multinode-875000-m02", held for 37.276502003s
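	(The delta reported above is simply the guest clock minus the host clock at the moment of the check: 1721238892.728031416 - 1721238892.603259 = 0.124772416 s, i.e. the ~124.77 ms the log prints, which falls inside the skew tolerance.)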
	I0717 10:54:52.654073    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .DriverName
	I0717 10:54:52.654220    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetIP
	I0717 10:54:52.677512    4493 out.go:177] * Found network options:
	I0717 10:54:52.719533    4493 out.go:177]   - NO_PROXY=192.169.0.15
	W0717 10:54:52.740505    4493 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:54:52.740530    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .DriverName
	I0717 10:54:52.740996    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .DriverName
	I0717 10:54:52.741124    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .DriverName
	I0717 10:54:52.741208    4493 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:54:52.741230    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	W0717 10:54:52.741259    4493 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:54:52.741332    4493 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 10:54:52.741345    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:54:52.741356    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:54:52.741448    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:52.741475    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:54:52.741545    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:54:52.741585    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:54:52.741632    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/id_rsa Username:docker}
	I0717 10:54:52.741667    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:54:52.741763    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/id_rsa Username:docker}
	I0717 10:54:52.770803    4493 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 10:54:52.770858    4493 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:54:52.770915    4493 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:54:52.818078    4493 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 10:54:52.818448    4493 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0717 10:54:52.818466    4493 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:54:52.818473    4493 start.go:495] detecting cgroup driver to use...
	I0717 10:54:52.818539    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:54:52.833779    4493 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0717 10:54:52.834101    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:54:52.843741    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:54:52.852983    4493 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:54:52.853036    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:54:52.862047    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:54:52.871203    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:54:52.880044    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:54:52.889152    4493 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:54:52.898259    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:54:52.906974    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:54:52.915724    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:54:52.924512    4493 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:54:52.932801    4493 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 10:54:52.932864    4493 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:54:52.941242    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:54:53.038142    4493 ssh_runner.go:195] Run: sudo systemctl restart containerd
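	(The sed edits above force containerd onto the cgroupfs cgroup driver before the restart; the relevant setting typically ends up under the runc options table in /etc/containerd/config.toml. An illustrative way to confirm on the node:

	    sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expected: SystemdCgroup = false
	)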
	I0717 10:54:53.056643    4493 start.go:495] detecting cgroup driver to use...
	I0717 10:54:53.056711    4493 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:54:53.073759    4493 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0717 10:54:53.075346    4493 command_runner.go:130] > [Unit]
	I0717 10:54:53.075355    4493 command_runner.go:130] > Description=Docker Application Container Engine
	I0717 10:54:53.075360    4493 command_runner.go:130] > Documentation=https://docs.docker.com
	I0717 10:54:53.075369    4493 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0717 10:54:53.075375    4493 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0717 10:54:53.075379    4493 command_runner.go:130] > StartLimitBurst=3
	I0717 10:54:53.075383    4493 command_runner.go:130] > StartLimitIntervalSec=60
	I0717 10:54:53.075390    4493 command_runner.go:130] > [Service]
	I0717 10:54:53.075393    4493 command_runner.go:130] > Type=notify
	I0717 10:54:53.075397    4493 command_runner.go:130] > Restart=on-failure
	I0717 10:54:53.075401    4493 command_runner.go:130] > Environment=NO_PROXY=192.169.0.15
	I0717 10:54:53.075407    4493 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0717 10:54:53.075415    4493 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0717 10:54:53.075421    4493 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0717 10:54:53.075427    4493 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0717 10:54:53.075433    4493 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0717 10:54:53.075438    4493 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0717 10:54:53.075444    4493 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0717 10:54:53.075457    4493 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0717 10:54:53.075463    4493 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0717 10:54:53.075467    4493 command_runner.go:130] > ExecStart=
	I0717 10:54:53.075478    4493 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0717 10:54:53.075484    4493 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0717 10:54:53.075490    4493 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0717 10:54:53.075495    4493 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0717 10:54:53.075499    4493 command_runner.go:130] > LimitNOFILE=infinity
	I0717 10:54:53.075503    4493 command_runner.go:130] > LimitNPROC=infinity
	I0717 10:54:53.075512    4493 command_runner.go:130] > LimitCORE=infinity
	I0717 10:54:53.075517    4493 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0717 10:54:53.075521    4493 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0717 10:54:53.075525    4493 command_runner.go:130] > TasksMax=infinity
	I0717 10:54:53.075529    4493 command_runner.go:130] > TimeoutStartSec=0
	I0717 10:54:53.075534    4493 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0717 10:54:53.075538    4493 command_runner.go:130] > Delegate=yes
	I0717 10:54:53.075542    4493 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0717 10:54:53.075550    4493 command_runner.go:130] > KillMode=process
	I0717 10:54:53.075555    4493 command_runner.go:130] > [Install]
	I0717 10:54:53.075559    4493 command_runner.go:130] > WantedBy=multi-user.target
	I0717 10:54:53.075672    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:54:53.087097    4493 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:54:53.104469    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:54:53.115846    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:54:53.126912    4493 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:54:53.147236    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:54:53.158538    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:54:53.173393    4493 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0717 10:54:53.173643    4493 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:54:53.176317    4493 command_runner.go:130] > /usr/bin/cri-dockerd
	I0717 10:54:53.176498    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:54:53.184492    4493 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:54:53.197780    4493 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:54:53.296195    4493 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:54:53.414543    4493 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:54:53.414564    4493 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:54:53.428402    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:54:53.522036    4493 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:54:55.814835    4493 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.292717825s)
	I0717 10:54:55.814898    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 10:54:55.825345    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:54:55.835555    4493 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 10:54:55.928559    4493 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 10:54:56.021035    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:54:56.124129    4493 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 10:54:56.137843    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 10:54:56.149036    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:54:56.250690    4493 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 10:54:56.306186    4493 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 10:54:56.306260    4493 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 10:54:56.312094    4493 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0717 10:54:56.312108    4493 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 10:54:56.312113    4493 command_runner.go:130] > Device: 0,22	Inode: 774         Links: 1
	I0717 10:54:56.312119    4493 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0717 10:54:56.312123    4493 command_runner.go:130] > Access: 2024-07-17 17:54:56.337734451 +0000
	I0717 10:54:56.312133    4493 command_runner.go:130] > Modify: 2024-07-17 17:54:56.337734451 +0000
	I0717 10:54:56.312138    4493 command_runner.go:130] > Change: 2024-07-17 17:54:56.339734451 +0000
	I0717 10:54:56.312141    4493 command_runner.go:130] >  Birth: -
	I0717 10:54:56.312293    4493 start.go:563] Will wait 60s for crictl version
	I0717 10:54:56.312346    4493 ssh_runner.go:195] Run: which crictl
	I0717 10:54:56.315353    4493 command_runner.go:130] > /usr/bin/crictl
	I0717 10:54:56.315462    4493 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 10:54:56.343066    4493 command_runner.go:130] > Version:  0.1.0
	I0717 10:54:56.343082    4493 command_runner.go:130] > RuntimeName:  docker
	I0717 10:54:56.343089    4493 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0717 10:54:56.343093    4493 command_runner.go:130] > RuntimeApiVersion:  v1
	I0717 10:54:56.343140    4493 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 10:54:56.343208    4493 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:54:56.360434    4493 command_runner.go:130] > 27.0.3
	I0717 10:54:56.361492    4493 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 10:54:56.378143    4493 command_runner.go:130] > 27.0.3
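	(With /etc/crictl.yaml now pointing at cri-dockerd, as written a few lines above, crictl on the node reaches Docker through the CRI shim. An illustrative check that makes the endpoint explicit:

	    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
	)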
	I0717 10:54:56.401360    4493 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 10:54:56.421398    4493 out.go:177]   - env NO_PROXY=192.169.0.15
	I0717 10:54:56.442562    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetIP
	I0717 10:54:56.442948    4493 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0717 10:54:56.447840    4493 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:54:56.457210    4493 mustload.go:65] Loading cluster: multinode-875000
	I0717 10:54:56.457386    4493 config.go:182] Loaded profile config "multinode-875000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:54:56.457622    4493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:54:56.457644    4493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:54:56.466316    4493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53218
	I0717 10:54:56.466820    4493 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:54:56.467157    4493 main.go:141] libmachine: Using API Version  1
	I0717 10:54:56.467169    4493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:54:56.467393    4493 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:54:56.467497    4493 main.go:141] libmachine: (multinode-875000) Calling .GetState
	I0717 10:54:56.467585    4493 main.go:141] libmachine: (multinode-875000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:54:56.467673    4493 main.go:141] libmachine: (multinode-875000) DBG | hyperkit pid from json: 4506
	I0717 10:54:56.468609    4493 host.go:66] Checking if "multinode-875000" exists ...
	I0717 10:54:56.468870    4493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:54:56.468894    4493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:54:56.477263    4493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53220
	I0717 10:54:56.477587    4493 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:54:56.477971    4493 main.go:141] libmachine: Using API Version  1
	I0717 10:54:56.477987    4493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:54:56.478209    4493 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:54:56.478326    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:54:56.478414    4493 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000 for IP: 192.169.0.16
	I0717 10:54:56.478420    4493 certs.go:194] generating shared ca certs ...
	I0717 10:54:56.478433    4493 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:54:56.478579    4493 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
	I0717 10:54:56.478638    4493 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
	I0717 10:54:56.478648    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 10:54:56.478673    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 10:54:56.478692    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 10:54:56.478710    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 10:54:56.478796    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
	W0717 10:54:56.478835    4493 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
	I0717 10:54:56.478845    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 10:54:56.478883    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
	I0717 10:54:56.478919    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
	I0717 10:54:56.478951    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
	I0717 10:54:56.479022    4493 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:54:56.479056    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
	I0717 10:54:56.479078    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:54:56.479096    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
	I0717 10:54:56.479119    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 10:54:56.499479    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 10:54:56.520218    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 10:54:56.539892    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 10:54:56.561151    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
	I0717 10:54:56.580936    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 10:54:56.600650    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
	I0717 10:54:56.620542    4493 ssh_runner.go:195] Run: openssl version
	I0717 10:54:56.624552    4493 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0717 10:54:56.624757    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
	I0717 10:54:56.632959    4493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
	I0717 10:54:56.636184    4493 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:54:56.636381    4493 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
	I0717 10:54:56.636425    4493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
	I0717 10:54:56.640335    4493 command_runner.go:130] > 3ec20f2e
	I0717 10:54:56.640548    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 10:54:56.648681    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 10:54:56.656838    4493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:54:56.660147    4493 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:54:56.660234    4493 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:54:56.660270    4493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 10:54:56.664305    4493 command_runner.go:130] > b5213941
	I0717 10:54:56.664449    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 10:54:56.672888    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
	I0717 10:54:56.681217    4493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
	I0717 10:54:56.684479    4493 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:54:56.684605    4493 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
	I0717 10:54:56.684642    4493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
	I0717 10:54:56.688668    4493 command_runner.go:130] > 51391683
	I0717 10:54:56.688809    4493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
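The three openssl/ln steps above are how the runner makes each CA bundle discoverable on the guest: hash the certificate, then link /etc/ssl/certs/<hash>.0 back to it. Below is a minimal Go sketch of that hash-and-link step, run locally instead of through the ssh_runner; the cert path is copied from the log, the sudo call is an assumption about local permissions, and this is an illustration rather than minikube's own helper.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCACert mirrors the hash-and-symlink step in the log: ask openssl for the
// subject hash of certPath, then link /etc/ssl/certs/<hash>.0 to it so
// OpenSSL-based clients can resolve the CA by hash.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Equivalent of the "test -L ... || ln -fs ..." guard above, minus the test.
	return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("link failed:", err)
	}
}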
	I0717 10:54:56.697420    4493 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 10:54:56.700396    4493 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 10:54:56.700532    4493 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 10:54:56.700566    4493 kubeadm.go:934] updating node {m02 192.169.0.16 8443 v1.30.2 docker false true} ...
	I0717 10:54:56.700619    4493 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-875000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 10:54:56.700661    4493 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 10:54:56.707687    4493 command_runner.go:130] > kubeadm
	I0717 10:54:56.707698    4493 command_runner.go:130] > kubectl
	I0717 10:54:56.707701    4493 command_runner.go:130] > kubelet
	I0717 10:54:56.707712    4493 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 10:54:56.707752    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0717 10:54:56.714963    4493 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0717 10:54:56.728389    4493 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 10:54:56.741770    4493 ssh_runner.go:195] Run: grep 192.169.0.15	control-plane.minikube.internal$ /etc/hosts
	I0717 10:54:56.744668    4493 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 10:54:56.754021    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:54:56.845666    4493 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:54:56.860725    4493 host.go:66] Checking if "multinode-875000" exists ...
	I0717 10:54:56.861012    4493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:54:56.861037    4493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:54:56.869837    4493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53222
	I0717 10:54:56.870195    4493 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:54:56.870563    4493 main.go:141] libmachine: Using API Version  1
	I0717 10:54:56.870576    4493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:54:56.870787    4493 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:54:56.870902    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:54:56.871001    4493 start.go:317] joinCluster: &{Name:multinode-875000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.2 ClusterName:multinode-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.15 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.17 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:f
alse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:54:56.871094    4493 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0717 10:54:56.871120    4493 host.go:66] Checking if "multinode-875000-m02" exists ...
	I0717 10:54:56.871394    4493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:54:56.871421    4493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:54:56.880400    4493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53224
	I0717 10:54:56.880751    4493 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:54:56.881110    4493 main.go:141] libmachine: Using API Version  1
	I0717 10:54:56.881127    4493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:54:56.881441    4493 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:54:56.881593    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .DriverName
	I0717 10:54:56.881682    4493 mustload.go:65] Loading cluster: multinode-875000
	I0717 10:54:56.881867    4493 config.go:182] Loaded profile config "multinode-875000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:54:56.882088    4493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:54:56.882112    4493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:54:56.890925    4493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53226
	I0717 10:54:56.891286    4493 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:54:56.891611    4493 main.go:141] libmachine: Using API Version  1
	I0717 10:54:56.891627    4493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:54:56.891830    4493 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:54:56.891949    4493 main.go:141] libmachine: (multinode-875000) Calling .GetState
	I0717 10:54:56.892027    4493 main.go:141] libmachine: (multinode-875000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:54:56.892105    4493 main.go:141] libmachine: (multinode-875000) DBG | hyperkit pid from json: 4506
	I0717 10:54:56.893081    4493 host.go:66] Checking if "multinode-875000" exists ...
	I0717 10:54:56.893356    4493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:54:56.893379    4493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:54:56.902259    4493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53228
	I0717 10:54:56.902618    4493 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:54:56.902942    4493 main.go:141] libmachine: Using API Version  1
	I0717 10:54:56.902953    4493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:54:56.903151    4493 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:54:56.903258    4493 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:54:56.903348    4493 api_server.go:166] Checking apiserver status ...
	I0717 10:54:56.903400    4493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 10:54:56.903410    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:54:56.903490    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:54:56.903570    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:54:56.903659    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:54:56.903737    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/id_rsa Username:docker}
	I0717 10:54:56.942665    4493 command_runner.go:130] > 1676
	I0717 10:54:56.942766    4493 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1676/cgroup
	W0717 10:54:56.951065    4493 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1676/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 10:54:56.951135    4493 ssh_runner.go:195] Run: ls
	I0717 10:54:56.954491    4493 api_server.go:253] Checking apiserver healthz at https://192.169.0.15:8443/healthz ...
	I0717 10:54:56.958186    4493 api_server.go:279] https://192.169.0.15:8443/healthz returned 200:
	ok
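The apiserver check above amounts to an HTTPS GET of /healthz using the cluster CA; a 200 response with body "ok" is treated as healthy. The following is a small illustrative Go probe of the same endpoint, with the CA path and address taken from the log; it is a sketch, not the api_server helper whose output appears above.

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	// CA bundle and endpoint as they appear in the log above.
	caPEM, err := os.ReadFile("/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	resp, err := client.Get("https://192.169.0.15:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}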
	I0717 10:54:56.958243    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl drain multinode-875000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0717 10:54:57.041498    4493 command_runner.go:130] > node/multinode-875000-m02 cordoned
	I0717 10:55:00.062461    4493 command_runner.go:130] > pod "busybox-fc5497c4f-sp4jf" has DeletionTimestamp older than 1 seconds, skipping
	I0717 10:55:00.062482    4493 command_runner.go:130] > node/multinode-875000-m02 drained
	I0717 10:55:00.064322    4493 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-pj9kh, kube-system/kube-proxy-tp2zz
	I0717 10:55:00.064403    4493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl drain multinode-875000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.10605879s)
	I0717 10:55:00.064418    4493 node.go:128] successfully drained node "multinode-875000-m02"
	I0717 10:55:00.064443    4493 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0717 10:55:00.064478    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:55:00.064611    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:55:00.064706    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:55:00.064802    4493 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:55:00.064885    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/id_rsa Username:docker}
	I0717 10:55:00.146148    4493 command_runner.go:130] > [preflight] Running pre-flight checks
	I0717 10:55:00.146319    4493 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0717 10:55:00.146328    4493 command_runner.go:130] > [reset] Stopping the kubelet service
	I0717 10:55:00.153055    4493 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0717 10:55:00.362034    4493 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0717 10:55:00.363625    4493 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0717 10:55:00.363636    4493 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0717 10:55:00.363645    4493 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0717 10:55:00.363652    4493 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0717 10:55:00.363658    4493 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0717 10:55:00.363662    4493 command_runner.go:130] > to reset your system's IPVS tables.
	I0717 10:55:00.363667    4493 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0717 10:55:00.363678    4493 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0717 10:55:00.364432    4493 command_runner.go:130] ! W0717 17:55:00.225625    1261 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0717 10:55:00.364457    4493 command_runner.go:130] ! W0717 17:55:00.441414    1261 cleanupnode.go:106] [reset] Failed to remove containers: failed to stop running pod 0feb0b072ced7ae109f1a463a2def851272dd796646878c46af448aa0c69e0be: output: E0717 17:55:00.346667    1290 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-sp4jf_default\" network: cni config uninitialized" podSandboxID="0feb0b072ced7ae109f1a463a2def851272dd796646878c46af448aa0c69e0be"
	I0717 10:55:00.364470    4493 command_runner.go:130] ! time="2024-07-17T17:55:00Z" level=fatal msg="stopping the pod sandbox \"0feb0b072ced7ae109f1a463a2def851272dd796646878c46af448aa0c69e0be\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-sp4jf_default\" network: cni config uninitialized"
	I0717 10:55:00.364479    4493 command_runner.go:130] ! : exit status 1
	I0717 10:55:00.364491    4493 node.go:155] successfully reset node "multinode-875000-m02"
	I0717 10:55:00.364766    4493 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:55:00.365012    4493 kapi.go:59] client config for multinode-875000: &rest.Config{Host:"https://192.169.0.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xeec6b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 10:55:00.365282    4493 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0717 10:55:00.365311    4493 round_trippers.go:463] DELETE https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:00.365315    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:00.365322    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:00.365325    4493 round_trippers.go:473]     Content-Type: application/json
	I0717 10:55:00.365329    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:00.367975    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:00.367985    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:00.367991    4493 round_trippers.go:580]     Content-Length: 171
	I0717 10:55:00.367994    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:00 GMT
	I0717 10:55:00.368006    4493 round_trippers.go:580]     Audit-Id: 7409137d-ed16-4812-8938-99c2d2747fe9
	I0717 10:55:00.368012    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:00.368014    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:00.368017    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:00.368020    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:00.368030    4493 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-875000-m02","kind":"nodes","uid":"e92886e5-127c-42d8-b0f7-76db7895a433"}}
	I0717 10:55:00.368058    4493 node.go:180] successfully deleted node "multinode-875000-m02"
	I0717 10:55:00.368066    4493 start.go:334] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
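Dropping the stale m02 Node object before rejoining is a plain DELETE against the API server, which is what the round-tripper lines above record. An equivalent client-go sketch follows; the kubeconfig path and node name come from the log, and the error handling is deliberately blunt for brevity.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19283-1099/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent of the DELETE .../api/v1/nodes/multinode-875000-m02 request above.
	if err := cs.CoreV1().Nodes().Delete(context.Background(), "multinode-875000-m02", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("node deleted")
}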
	I0717 10:55:00.368088    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 10:55:00.368103    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:55:00.368251    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:55:00.368350    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:55:00.368443    4493 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:55:00.368537    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/id_rsa Username:docker}
	I0717 10:55:00.451521    4493 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 5xpi7v.8qt9i595u32wzn59 --discovery-token-ca-cert-hash sha256:6ede73121e365fd80e9329df76f11084b0ca9769c5610fa08d82ec64ba1ac24d 
	I0717 10:55:00.453483    4493 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0717 10:55:00.453501    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5xpi7v.8qt9i595u32wzn59 --discovery-token-ca-cert-hash sha256:6ede73121e365fd80e9329df76f11084b0ca9769c5610fa08d82ec64ba1ac24d --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-875000-m02"
	I0717 10:55:00.488054    4493 command_runner.go:130] > [preflight] Running pre-flight checks
	I0717 10:55:00.586862    4493 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0717 10:55:00.586883    4493 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0717 10:55:00.619472    4493 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 10:55:00.619554    4493 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 10:55:00.619636    4493 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0717 10:55:00.724010    4493 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 10:55:01.224638    4493 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.102674ms
	I0717 10:55:01.224657    4493 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0717 10:55:01.236511    4493 command_runner.go:130] > This node has joined the cluster:
	I0717 10:55:01.236525    4493 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0717 10:55:01.236530    4493 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0717 10:55:01.236536    4493 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0717 10:55:01.238002    4493 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 10:55:01.238169    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 10:55:01.342452    4493 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0717 10:55:01.442891    4493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-875000-m02 minikube.k8s.io/updated_at=2024_07_17T10_55_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=multinode-875000 minikube.k8s.io/primary=false
	I0717 10:55:01.510313    4493 command_runner.go:130] > node/multinode-875000-m02 labeled
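The kubectl label invocation above stamps minikube's bookkeeping labels onto the freshly joined node. The same result can be obtained with a strategic-merge patch; the sketch below copies three of those labels from the log line and is purely illustrative, not the code path minikube itself uses.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Error handling trimmed for brevity; paths and names are from the log.
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19283-1099/kubeconfig")
	cs, _ := kubernetes.NewForConfig(cfg)

	// A subset of the labels the kubectl command above applies with --overwrite.
	patch := []byte(`{"metadata":{"labels":{` +
		`"minikube.k8s.io/name":"multinode-875000",` +
		`"minikube.k8s.io/primary":"false",` +
		`"minikube.k8s.io/version":"v1.33.1"}}}`)

	_, err := cs.CoreV1().Nodes().Patch(context.Background(), "multinode-875000-m02",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}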
	I0717 10:55:01.510343    4493 start.go:319] duration metric: took 4.639218919s to joinCluster
	I0717 10:55:01.510392    4493 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0717 10:55:01.510566    4493 config.go:182] Loaded profile config "multinode-875000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:55:01.533251    4493 out.go:177] * Verifying Kubernetes components...
	I0717 10:55:01.592481    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:55:01.687439    4493 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 10:55:01.699553    4493 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:55:01.699737    4493 kapi.go:59] client config for multinode-875000: &rest.Config{Host:"https://192.169.0.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xeec6b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 10:55:01.699921    4493 node_ready.go:35] waiting up to 6m0s for node "multinode-875000-m02" to be "Ready" ...
	I0717 10:55:01.699961    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:01.699966    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:01.699972    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:01.699975    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:01.701540    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:55:01.701553    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:01.701562    4493 round_trippers.go:580]     Audit-Id: d610837f-a903-4595-b821-1ecb3d160396
	I0717 10:55:01.701571    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:01.701594    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:01.701601    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:01.701605    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:01.701608    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:01 GMT
	I0717 10:55:01.701839    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"980","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3563 chars]
	I0717 10:55:02.200205    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:02.200225    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:02.200236    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:02.200241    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:02.202575    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:02.202587    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:02.202594    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:02.202600    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:02.202604    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:02.202610    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:02 GMT
	I0717 10:55:02.202620    4493 round_trippers.go:580]     Audit-Id: ccb73072-2e97-4be1-996d-85722f328eaa
	I0717 10:55:02.202632    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:02.203135    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"980","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3563 chars]
	I0717 10:55:02.700739    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:02.700763    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:02.700774    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:02.700781    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:02.703244    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:02.703259    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:02.703265    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:02.703270    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:02.703274    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:02.703279    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:02.703309    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:02 GMT
	I0717 10:55:02.703320    4493 round_trippers.go:580]     Audit-Id: 05307e3b-9f52-428a-9ee7-31cb89be7343
	I0717 10:55:02.703388    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"980","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3563 chars]
	I0717 10:55:03.200219    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:03.200238    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:03.200244    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:03.200247    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:03.202237    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:55:03.202251    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:03.202257    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:03 GMT
	I0717 10:55:03.202260    4493 round_trippers.go:580]     Audit-Id: bd11140b-6603-4f0c-b555-8d97e33b2574
	I0717 10:55:03.202264    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:03.202268    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:03.202272    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:03.202276    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:03.202349    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:03.700095    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:03.700113    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:03.700169    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:03.700174    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:03.702595    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:03.702609    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:03.702615    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:03.702617    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:03.702620    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:03.702623    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:03 GMT
	I0717 10:55:03.702625    4493 round_trippers.go:580]     Audit-Id: 53111263-6831-434a-a406-ff2e35a2b89f
	I0717 10:55:03.702628    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:03.702725    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:03.702909    4493 node_ready.go:53] node "multinode-875000-m02" has status "Ready":"False"
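From this point the runner simply re-reads the Node object roughly every half second and inspects its Ready condition, giving up after the 6m0s budget noted above. A compact client-go sketch of that wait loop follows; the node name, interval, and timeout are taken from the log, and a plain loop stands in for minikube's own node_ready helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Error handling trimmed for brevity; kubeconfig path is from the log.
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19283-1099/kubeconfig")
	cs, _ := kubernetes.NewForConfig(cfg)

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-875000-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	fmt.Println("timed out waiting for node to become Ready")
}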
	I0717 10:55:04.200135    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:04.200151    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:04.200158    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:04.200162    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:04.201606    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:55:04.201615    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:04.201620    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:04.201625    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:04.201628    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:04 GMT
	I0717 10:55:04.201631    4493 round_trippers.go:580]     Audit-Id: 8461e4b0-4d8b-4981-ac49-4c0f962bf063
	I0717 10:55:04.201635    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:04.201637    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:04.201723    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:04.700291    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:04.700313    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:04.700324    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:04.700330    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:04.702569    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:04.702584    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:04.702591    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:04.702598    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:04.702602    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:04.702605    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:04 GMT
	I0717 10:55:04.702609    4493 round_trippers.go:580]     Audit-Id: 966e4f03-6e8b-4a3f-9dce-940f1d802dfd
	I0717 10:55:04.702613    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:04.702687    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:05.200269    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:05.200382    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:05.200398    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:05.200421    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:05.203290    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:05.203305    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:05.203313    4493 round_trippers.go:580]     Audit-Id: fb046001-3559-4356-b9ce-d7024ab60ed1
	I0717 10:55:05.203317    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:05.203320    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:05.203324    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:05.203329    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:05.203332    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:05 GMT
	I0717 10:55:05.203412    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:05.700402    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:05.700506    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:05.700522    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:05.700533    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:05.702879    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:05.702892    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:05.702899    4493 round_trippers.go:580]     Audit-Id: c7ed85b0-8fbe-4717-8f42-5e4801ed70d8
	I0717 10:55:05.702925    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:05.702932    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:05.702937    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:05.702942    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:05.702947    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:05 GMT
	I0717 10:55:05.703176    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:05.703400    4493 node_ready.go:53] node "multinode-875000-m02" has status "Ready":"False"
	I0717 10:55:06.200532    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:06.200547    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:06.200556    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:06.200559    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:06.202322    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:55:06.202335    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:06.202341    4493 round_trippers.go:580]     Audit-Id: c4b5945c-d4ba-468e-aa74-89f74ef67368
	I0717 10:55:06.202344    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:06.202349    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:06.202352    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:06.202356    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:06.202368    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:06 GMT
	I0717 10:55:06.202610    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:06.700376    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:06.700393    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:06.700404    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:06.700410    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:06.703109    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:06.703121    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:06.703128    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:06.703133    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:06.703137    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:06 GMT
	I0717 10:55:06.703141    4493 round_trippers.go:580]     Audit-Id: d9117cca-976b-42dd-bf76-ea2d62050fb5
	I0717 10:55:06.703146    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:06.703149    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:06.703625    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:07.200703    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:07.200725    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:07.200736    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:07.200743    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:07.203081    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:07.203094    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:07.203101    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:07.203106    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:07 GMT
	I0717 10:55:07.203110    4493 round_trippers.go:580]     Audit-Id: 6d1174d1-8eb9-47c5-894e-5597178454de
	I0717 10:55:07.203122    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:07.203127    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:07.203133    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:07.203410    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:07.702052    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:07.702122    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:07.702135    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:07.702142    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:07.704618    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:07.704630    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:07.704638    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:07.704646    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:07.704649    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:07.704653    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:07.704657    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:07 GMT
	I0717 10:55:07.704661    4493 round_trippers.go:580]     Audit-Id: 520f58b3-f649-4814-a04e-8e8d393a90be
	I0717 10:55:07.704718    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:07.704939    4493 node_ready.go:53] node "multinode-875000-m02" has status "Ready":"False"
	I0717 10:55:08.201539    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:08.201560    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:08.201572    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:08.201577    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:08.204001    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:08.204016    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:08.204023    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:08.204028    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:08.204056    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:08.204063    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:08 GMT
	I0717 10:55:08.204068    4493 round_trippers.go:580]     Audit-Id: 01e4b9d6-8768-44a4-bdb9-614da75a5859
	I0717 10:55:08.204071    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:08.204144    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:08.700307    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:08.700322    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:08.700330    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:08.700336    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:08.702138    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:55:08.702150    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:08.702155    4493 round_trippers.go:580]     Audit-Id: 6b587bbd-1dd4-42b4-9106-21df5494a268
	I0717 10:55:08.702159    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:08.702161    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:08.702164    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:08.702167    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:08.702169    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:08 GMT
	I0717 10:55:08.702275    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:09.200553    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:09.200583    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:09.200595    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:09.200601    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:09.203493    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:09.203513    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:09.203523    4493 round_trippers.go:580]     Audit-Id: bbe0d358-4712-4872-802a-e6a8cee28ec6
	I0717 10:55:09.203530    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:09.203536    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:09.203541    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:09.203548    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:09.203553    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:09 GMT
	I0717 10:55:09.203687    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:09.701289    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:09.701312    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:09.701323    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:09.701329    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:09.704307    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:09.704328    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:09.704336    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:09.704341    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:09.704344    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:09.704364    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:09.704375    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:09 GMT
	I0717 10:55:09.704382    4493 round_trippers.go:580]     Audit-Id: 43108773-3b2c-43b9-a2fc-a67e253c4276
	I0717 10:55:09.704722    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:10.201373    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:10.201396    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:10.201408    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:10.201416    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:10.203944    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:10.203960    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:10.203970    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:10.203975    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:10.203979    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:10.203982    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:10 GMT
	I0717 10:55:10.203985    4493 round_trippers.go:580]     Audit-Id: 8334ea88-80fe-481f-bfc2-3bdf8e5007a0
	I0717 10:55:10.203988    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:10.204137    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:10.204365    4493 node_ready.go:53] node "multinode-875000-m02" has status "Ready":"False"
	I0717 10:55:10.701655    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:10.701678    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:10.701689    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:10.701696    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:10.704152    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:10.704169    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:10.704177    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:10.704181    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:10.704193    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:10 GMT
	I0717 10:55:10.704197    4493 round_trippers.go:580]     Audit-Id: 98f9d28b-0a42-4910-8099-c1c2a1178293
	I0717 10:55:10.704200    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:10.704204    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:10.704466    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:11.200326    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:11.200339    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:11.200345    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:11.200349    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:11.202093    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:55:11.202104    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:11.202109    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:11.202113    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:11 GMT
	I0717 10:55:11.202118    4493 round_trippers.go:580]     Audit-Id: cb492284-7d75-4f08-8d35-0a3336ca07bf
	I0717 10:55:11.202121    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:11.202125    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:11.202129    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:11.202242    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"982","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3672 chars]
	I0717 10:55:11.700801    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:11.700828    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:11.700841    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:11.700846    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:11.703521    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:11.703536    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:11.703557    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:11.703567    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:11.703574    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:11.703582    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:11.703585    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:11 GMT
	I0717 10:55:11.703590    4493 round_trippers.go:580]     Audit-Id: b34c9af3-3583-4eff-8430-2584ab881f5f
	I0717 10:55:11.703833    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1011","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0717 10:55:12.200439    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:12.200460    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:12.200472    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:12.200477    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:12.202777    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:12.202790    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:12.202797    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:12.202811    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:12.202818    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:12.202822    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:12.202842    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:12 GMT
	I0717 10:55:12.202851    4493 round_trippers.go:580]     Audit-Id: 8c615bbd-dc66-4929-b8b1-81662a1a74a9
	I0717 10:55:12.202931    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1011","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0717 10:55:12.701497    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:12.701513    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:12.701522    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:12.701526    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:12.703441    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:55:12.703459    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:12.703469    4493 round_trippers.go:580]     Audit-Id: fe37677f-8d42-4dcd-a383-1973ea7c9482
	I0717 10:55:12.703478    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:12.703483    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:12.703488    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:12.703495    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:12.703506    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:12 GMT
	I0717 10:55:12.703677    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1011","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0717 10:55:12.703854    4493 node_ready.go:53] node "multinode-875000-m02" has status "Ready":"False"
	I0717 10:55:13.200995    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:13.201016    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:13.201029    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:13.201034    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:13.203629    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:13.203645    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:13.203652    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:13.203658    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:13 GMT
	I0717 10:55:13.203663    4493 round_trippers.go:580]     Audit-Id: b6627e79-75cd-47c9-b51d-de40f1b5842a
	I0717 10:55:13.203666    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:13.203670    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:13.203673    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:13.203772    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1011","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0717 10:55:13.700366    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:13.700380    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:13.700386    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:13.700390    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:13.702030    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:55:13.702042    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:13.702049    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:13.702052    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:13.702056    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:13.702060    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:13.702065    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:13 GMT
	I0717 10:55:13.702068    4493 round_trippers.go:580]     Audit-Id: 96cc2dbe-d717-407c-8f23-63cfc496bdf3
	I0717 10:55:13.702275    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1011","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0717 10:55:14.201033    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:14.201055    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:14.201066    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:14.201073    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:14.203567    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:14.203579    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:14.203586    4493 round_trippers.go:580]     Audit-Id: 34ad8a68-a477-459a-bd54-aff77356161d
	I0717 10:55:14.203590    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:14.203598    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:14.203602    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:14.203605    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:14.203609    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:14 GMT
	I0717 10:55:14.203837    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1011","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0717 10:55:14.700783    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:14.700809    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:14.700854    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:14.700860    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:14.703359    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:14.703377    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:14.703386    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:14.703395    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:14.703403    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:14 GMT
	I0717 10:55:14.703407    4493 round_trippers.go:580]     Audit-Id: 2d4c051d-a143-4bc8-ab8a-d84f2bf13089
	I0717 10:55:14.703412    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:14.703431    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:14.703552    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1011","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0717 10:55:15.200537    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:15.200560    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:15.200569    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:15.200577    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:15.202608    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:15.202621    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:15.202628    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:15.202632    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:15 GMT
	I0717 10:55:15.202636    4493 round_trippers.go:580]     Audit-Id: c12a4e94-3a66-4478-a6dc-bd28300ef803
	I0717 10:55:15.202644    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:15.202648    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:15.202651    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:15.202819    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1011","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0717 10:55:15.203051    4493 node_ready.go:53] node "multinode-875000-m02" has status "Ready":"False"
	I0717 10:55:15.701393    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:15.701415    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:15.701427    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:15.701433    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:15.704339    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:15.704356    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:15.704363    4493 round_trippers.go:580]     Audit-Id: d52299e1-1abf-452d-afc0-2b3a8c6d1231
	I0717 10:55:15.704367    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:15.704372    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:15.704377    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:15.704380    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:15.704383    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:15 GMT
	I0717 10:55:15.704452    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1011","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0717 10:55:16.201611    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:16.201635    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.201646    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.201652    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.204226    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:16.204241    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.204261    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.204269    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:16 GMT
	I0717 10:55:16.204276    4493 round_trippers.go:580]     Audit-Id: 47db4f9b-982d-4d64-9a9b-4dd331b514bb
	I0717 10:55:16.204281    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.204286    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.204291    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.204515    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1011","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0717 10:55:16.700597    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:16.700621    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.700701    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.700710    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.726060    4493 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0717 10:55:16.726077    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.726085    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.726090    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.726095    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:16 GMT
	I0717 10:55:16.726101    4493 round_trippers.go:580]     Audit-Id: 14d733be-9593-4ef6-8fdb-9886cbf78bb5
	I0717 10:55:16.726107    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.726113    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.726316    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1018","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3931 chars]
	I0717 10:55:16.726548    4493 node_ready.go:49] node "multinode-875000-m02" has status "Ready":"True"
	I0717 10:55:16.726559    4493 node_ready.go:38] duration metric: took 15.026225552s for node "multinode-875000-m02" to be "Ready" ...
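	(Editor's sketch, not part of the captured test output.) The block above is minikube's node_ready loop: it issues GET /api/v1/nodes/multinode-875000-m02 roughly every 500ms and stops once the node's Ready condition flips to "True", here after ~15s. A minimal client-go sketch of that polling pattern is shown below; the package name, the waitForNodeReady helper, the fixed 500ms interval, and the pre-built clientset are assumptions for illustration only and do not reproduce minikube's actual implementation.

	// Package readiness: illustrative sketch of polling a node's Ready condition.
	package readiness

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForNodeReady polls the API server until the named node reports
	// NodeReady=True or the timeout expires (sketch only, assumed helper).
	func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat errors as transient and keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}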
	I0717 10:55:16.726567    4493 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:55:16.726609    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods
	I0717 10:55:16.726616    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.726624    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.726629    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.729121    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:16.729128    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.729132    4493 round_trippers.go:580]     Audit-Id: 1856a5e9-19d4-4f5b-8032-cb7c4f33d818
	I0717 10:55:16.729137    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.729143    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.729148    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.729153    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.729155    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:16 GMT
	I0717 10:55:16.730130    4493 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1022"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"895","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 86436 chars]
	I0717 10:55:16.732054    4493 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nlwxm" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:16.732094    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nlwxm
	I0717 10:55:16.732098    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.732115    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.732121    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.733261    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:55:16.733268    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.733273    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.733276    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:16 GMT
	I0717 10:55:16.733281    4493 round_trippers.go:580]     Audit-Id: a39df43c-8e87-4377-ad48-297b9d5cd4b5
	I0717 10:55:16.733285    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.733288    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.733291    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.733483    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-nlwxm","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d9e6c103-3eba-4549-b327-23c87ce480cd","resourceVersion":"895","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"18766312-080f-4023-9641-43536c3881ab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18766312-080f-4023-9641-43536c3881ab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6783 chars]
	I0717 10:55:16.733722    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:55:16.733728    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.733734    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.733737    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.734680    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:55:16.734687    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.734692    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.734695    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:16 GMT
	I0717 10:55:16.734699    4493 round_trippers.go:580]     Audit-Id: d0fc177b-2d7e-402f-9556-3e23d30f3b53
	I0717 10:55:16.734702    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.734705    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.734708    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.734816    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:55:16.734986    4493 pod_ready.go:92] pod "coredns-7db6d8ff4d-nlwxm" in "kube-system" namespace has status "Ready":"True"
	I0717 10:55:16.734993    4493 pod_ready.go:81] duration metric: took 2.929581ms for pod "coredns-7db6d8ff4d-nlwxm" in "kube-system" namespace to be "Ready" ...
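	(Editor's sketch, not part of the captured test output.) After the node is Ready, the pod_ready phase above fetches each system-critical pod and reports whether its Ready condition is "True". A small client-go sketch of that per-pod check follows; the podIsReady name and its signature are assumptions for illustration and are not minikube's pod_ready helper, which also inspects the pod's node as seen in the surrounding log.

	// podIsReady reports whether the named pod has condition Ready=True
	// (illustrative sketch; imports as in the node readiness sketch above).
	func podIsReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}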
	I0717 10:55:16.734999    4493 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:16.735032    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-875000
	I0717 10:55:16.735036    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.735042    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.735046    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.736001    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:55:16.736011    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.736016    4493 round_trippers.go:580]     Audit-Id: 93504772-a15e-4f35-b7f6-1885b347e61f
	I0717 10:55:16.736025    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.736029    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.736032    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.736035    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.736037    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:16 GMT
	I0717 10:55:16.736120    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-875000","namespace":"kube-system","uid":"b181608e-80a7-4ef3-9702-315fe76bc83b","resourceVersion":"868","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.15:2379","kubernetes.io/config.hash":"be038e8a848fb24f787b8a2643981714","kubernetes.io/config.mirror":"be038e8a848fb24f787b8a2643981714","kubernetes.io/config.seen":"2024-07-17T17:49:49.643438469Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6357 chars]
	I0717 10:55:16.736335    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:55:16.736341    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.736347    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.736352    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.737254    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:55:16.737261    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.737266    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.737270    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:16 GMT
	I0717 10:55:16.737273    4493 round_trippers.go:580]     Audit-Id: 55641bbe-3856-4eed-8881-fbee0685c13b
	I0717 10:55:16.737276    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.737279    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.737281    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.737424    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:55:16.737593    4493 pod_ready.go:92] pod "etcd-multinode-875000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:55:16.737603    4493 pod_ready.go:81] duration metric: took 2.596542ms for pod "etcd-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:16.737613    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:16.737642    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-875000
	I0717 10:55:16.737647    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.737652    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.737656    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.738744    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:55:16.738749    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.738753    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.738760    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.738765    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.738770    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.738773    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:16 GMT
	I0717 10:55:16.738775    4493 round_trippers.go:580]     Audit-Id: cc74545b-3ce4-4efd-b2c5-34a7e572d2e1
	I0717 10:55:16.738947    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-875000","namespace":"kube-system","uid":"994530a7-11e7-4b05-95ec-c77751a6c24d","resourceVersion":"872","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.15:8443","kubernetes.io/config.hash":"61312bb99a3953bb6d2fa540cea05ce5","kubernetes.io/config.mirror":"61312bb99a3953bb6d2fa540cea05ce5","kubernetes.io/config.seen":"2024-07-17T17:49:49.643441506Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0717 10:55:16.739187    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:55:16.739194    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.739199    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.739204    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.740519    4493 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 10:55:16.740534    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.740543    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.740560    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:16 GMT
	I0717 10:55:16.740566    4493 round_trippers.go:580]     Audit-Id: 652d62d1-ce80-4f6d-9576-03a17e0b8937
	I0717 10:55:16.740571    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.740574    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.740583    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.740761    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:55:16.740940    4493 pod_ready.go:92] pod "kube-apiserver-multinode-875000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:55:16.740947    4493 pod_ready.go:81] duration metric: took 3.329334ms for pod "kube-apiserver-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:16.740954    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:16.740988    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-875000
	I0717 10:55:16.740993    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.740998    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.741002    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.741960    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:55:16.741968    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.741972    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.741982    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.741987    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.741991    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.741995    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:16 GMT
	I0717 10:55:16.741998    4493 round_trippers.go:580]     Audit-Id: 7b3c7d51-ad80-40fb-9acb-44ca7ff96048
	I0717 10:55:16.742169    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-875000","namespace":"kube-system","uid":"10a5876c-ddf6-4f37-82ca-96ea7ebde028","resourceVersion":"875","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2fd022622c52536f8b0b923ecacc7ea2","kubernetes.io/config.mirror":"2fd022622c52536f8b0b923ecacc7ea2","kubernetes.io/config.seen":"2024-07-17T17:49:49.643442180Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7464 chars]
	I0717 10:55:16.742397    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:55:16.742404    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.742409    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.742413    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.743376    4493 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 10:55:16.743387    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.743393    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.743398    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.743401    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:16 GMT
	I0717 10:55:16.743406    4493 round_trippers.go:580]     Audit-Id: 6179a71e-5ca2-4260-a1bd-55b324d233c6
	I0717 10:55:16.743409    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.743412    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.743494    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:55:16.743716    4493 pod_ready.go:92] pod "kube-controller-manager-multinode-875000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:55:16.743726    4493 pod_ready.go:81] duration metric: took 2.766569ms for pod "kube-controller-manager-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:16.743740    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dnn4j" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:16.901292    4493 request.go:629] Waited for 157.497354ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dnn4j
	I0717 10:55:16.901464    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dnn4j
	I0717 10:55:16.901474    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:16.901493    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:16.901499    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:16.903974    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:16.903986    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:16.903995    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:17 GMT
	I0717 10:55:16.904003    4493 round_trippers.go:580]     Audit-Id: 4803e979-7eae-4122-8947-58ccbc9c8733
	I0717 10:55:16.904009    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:16.904015    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:16.904019    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:16.904027    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:16.904324    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dnn4j","generateName":"kube-proxy-","namespace":"kube-system","uid":"fd7faf4d-f212-4c89-9ac5-8e408c295411","resourceVersion":"930","creationTimestamp":"2024-07-17T17:51:33Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8a69f9ff-027d-455b-b65a-6c9aef9936e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:51:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a69f9ff-027d-455b-b65a-6c9aef9936e7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6056 chars]
	I0717 10:55:17.102759    4493 request.go:629] Waited for 198.049639ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m03
	I0717 10:55:17.102859    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m03
	I0717 10:55:17.102870    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:17.102882    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:17.102889    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:17.105906    4493 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:55:17.105923    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:17.105931    4493 round_trippers.go:580]     Audit-Id: 76d0cd12-471c-4e18-86a3-adac6efe39d4
	I0717 10:55:17.105935    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:17.105938    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:17.105941    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:17.105944    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:17.105950    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:17 GMT
	I0717 10:55:17.106403    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m03","uid":"4dcfd269-94b0-4652-bd6d-b7d938fc2b6d","resourceVersion":"941","creationTimestamp":"2024-07-17T17:52:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_52_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:52:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 4397 chars]
	I0717 10:55:17.106648    4493 pod_ready.go:97] node "multinode-875000-m03" hosting pod "kube-proxy-dnn4j" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000-m03" has status "Ready":"Unknown"
	I0717 10:55:17.106662    4493 pod_ready.go:81] duration metric: took 362.907614ms for pod "kube-proxy-dnn4j" in "kube-system" namespace to be "Ready" ...
	E0717 10:55:17.106694    4493 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-875000-m03" hosting pod "kube-proxy-dnn4j" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-875000-m03" has status "Ready":"Unknown"
	I0717 10:55:17.106709    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tp2zz" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:17.301730    4493 request.go:629] Waited for 194.949264ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp2zz
	I0717 10:55:17.301801    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp2zz
	I0717 10:55:17.301817    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:17.301828    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:17.301835    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:17.304783    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:17.304798    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:17.304805    4493 round_trippers.go:580]     Audit-Id: 943cc64b-404f-4f7f-937f-11ed72b7e6ec
	I0717 10:55:17.304809    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:17.304821    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:17.304827    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:17.304831    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:17.304834    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:17 GMT
	I0717 10:55:17.304949    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tp2zz","generateName":"kube-proxy-","namespace":"kube-system","uid":"9fda8ef7-b324-4cbb-a8d9-98f93132b2e7","resourceVersion":"997","creationTimestamp":"2024-07-17T17:50:42Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8a69f9ff-027d-455b-b65a-6c9aef9936e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a69f9ff-027d-455b-b65a-6c9aef9936e7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0717 10:55:17.502523    4493 request.go:629] Waited for 197.149589ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:17.502710    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000-m02
	I0717 10:55:17.502722    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:17.502732    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:17.502741    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:17.505415    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:17.505430    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:17.505438    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:17 GMT
	I0717 10:55:17.505443    4493 round_trippers.go:580]     Audit-Id: 901a17f3-78d9-41a7-ac16-c8c49a561782
	I0717 10:55:17.505448    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:17.505452    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:17.505464    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:17.505471    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:17.505728    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000-m02","uid":"c4aacfe5-1e75-4653-868d-ba776e9aece7","resourceVersion":"1018","creationTimestamp":"2024-07-17T17:55:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_17T10_55_01_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3931 chars]
	I0717 10:55:17.505964    4493 pod_ready.go:92] pod "kube-proxy-tp2zz" in "kube-system" namespace has status "Ready":"True"
	I0717 10:55:17.505975    4493 pod_ready.go:81] duration metric: took 399.244269ms for pod "kube-proxy-tp2zz" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:17.505983    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zs8f8" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:17.700691    4493 request.go:629] Waited for 194.656124ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zs8f8
	I0717 10:55:17.700818    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zs8f8
	I0717 10:55:17.700826    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:17.700837    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:17.700843    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:17.703408    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:17.703421    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:17.703428    4493 round_trippers.go:580]     Audit-Id: 1dc7e35c-f750-44d3-8764-34f022e1e8ef
	I0717 10:55:17.703433    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:17.703449    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:17.703457    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:17.703461    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:17.703468    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:17 GMT
	I0717 10:55:17.703710    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zs8f8","generateName":"kube-proxy-","namespace":"kube-system","uid":"9e2bce56-d9e0-42a1-a265-4aab3577b031","resourceVersion":"774","creationTimestamp":"2024-07-17T17:50:04Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8a69f9ff-027d-455b-b65a-6c9aef9936e7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:50:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a69f9ff-027d-455b-b65a-6c9aef9936e7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0717 10:55:17.902542    4493 request.go:629] Waited for 198.454297ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:55:17.902695    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:55:17.902704    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:17.902716    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:17.902726    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:17.906048    4493 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 10:55:17.906064    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:17.906071    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:17.906074    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:18 GMT
	I0717 10:55:17.906079    4493 round_trippers.go:580]     Audit-Id: f954c10c-4ffe-4bd7-b2c4-32df06ff1c24
	I0717 10:55:17.906082    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:17.906087    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:17.906092    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:17.906213    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:55:17.906477    4493 pod_ready.go:92] pod "kube-proxy-zs8f8" in "kube-system" namespace has status "Ready":"True"
	I0717 10:55:17.906489    4493 pod_ready.go:81] duration metric: took 400.48952ms for pod "kube-proxy-zs8f8" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:17.906497    4493 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:18.100806    4493 request.go:629] Waited for 194.255923ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-875000
	I0717 10:55:18.100944    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-875000
	I0717 10:55:18.100963    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:18.100976    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:18.100983    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:18.103096    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:18.103109    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:18.103116    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:18.103123    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:18.103127    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:18.103131    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:18 GMT
	I0717 10:55:18.103136    4493 round_trippers.go:580]     Audit-Id: 8a4a63de-1847-4f14-a54c-b57984d5fa46
	I0717 10:55:18.103139    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:18.103454    4493 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-875000","namespace":"kube-system","uid":"b2f1c23d-635b-490e-a964-c28e1566ead0","resourceVersion":"877","creationTimestamp":"2024-07-17T17:49:49Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e0081164f6599d019e7f075e4c8b7277","kubernetes.io/config.mirror":"e0081164f6599d019e7f075e4c8b7277","kubernetes.io/config.seen":"2024-07-17T17:49:49.643442746Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T17:49:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5194 chars]
	I0717 10:55:18.301075    4493 request.go:629] Waited for 197.262235ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:55:18.301191    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes/multinode-875000
	I0717 10:55:18.301201    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:18.301212    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:18.301218    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:18.304146    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:18.304161    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:18.304171    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:18.304180    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:18.304187    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:18 GMT
	I0717 10:55:18.304192    4493 round_trippers.go:580]     Audit-Id: e40f13c8-894a-468f-8a32-af0fe283917f
	I0717 10:55:18.304197    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:18.304202    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:18.304481    4493 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-17T17:49:47Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0717 10:55:18.304750    4493 pod_ready.go:92] pod "kube-scheduler-multinode-875000" in "kube-system" namespace has status "Ready":"True"
	I0717 10:55:18.304761    4493 pod_ready.go:81] duration metric: took 398.246673ms for pod "kube-scheduler-multinode-875000" in "kube-system" namespace to be "Ready" ...
	I0717 10:55:18.304770    4493 pod_ready.go:38] duration metric: took 1.578152715s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 10:55:18.304783    4493 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 10:55:18.304838    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 10:55:18.316091    4493 system_svc.go:56] duration metric: took 11.304975ms WaitForService to wait for kubelet
	I0717 10:55:18.316107    4493 kubeadm.go:582] duration metric: took 16.805246753s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:55:18.316118    4493 node_conditions.go:102] verifying NodePressure condition ...
	I0717 10:55:18.502715    4493 request.go:629] Waited for 186.547114ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.15:8443/api/v1/nodes
	I0717 10:55:18.502771    4493 round_trippers.go:463] GET https://192.169.0.15:8443/api/v1/nodes
	I0717 10:55:18.502830    4493 round_trippers.go:469] Request Headers:
	I0717 10:55:18.502845    4493 round_trippers.go:473]     Accept: application/json, */*
	I0717 10:55:18.502852    4493 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0717 10:55:18.505364    4493 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 10:55:18.505377    4493 round_trippers.go:577] Response Headers:
	I0717 10:55:18.505385    4493 round_trippers.go:580]     Audit-Id: b7f8dc13-51c8-4398-8efb-6f0c8d5fe1b4
	I0717 10:55:18.505389    4493 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 10:55:18.505396    4493 round_trippers.go:580]     Content-Type: application/json
	I0717 10:55:18.505399    4493 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d61fd5c8-af31-4893-a65a-2abfeefeecdf
	I0717 10:55:18.505404    4493 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 05aeda3a-f30d-4610-bdd2-c1d75193c0d2
	I0717 10:55:18.505410    4493 round_trippers.go:580]     Date: Wed, 17 Jul 2024 17:55:18 GMT
	I0717 10:55:18.505638    4493 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1026"},"items":[{"metadata":{"name":"multinode-875000","uid":"cb7d1c54-d051-4957-94a4-8ca4f4edb879","resourceVersion":"880","creationTimestamp":"2024-07-17T17:49:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-875000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"904d419c46be1a7134dbdb5e29deb5c439653f86","minikube.k8s.io/name":"multinode-875000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T10_49_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFie
lds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 15419 chars]
	I0717 10:55:18.506053    4493 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:55:18.506062    4493 node_conditions.go:123] node cpu capacity is 2
	I0717 10:55:18.506068    4493 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:55:18.506071    4493 node_conditions.go:123] node cpu capacity is 2
	I0717 10:55:18.506074    4493 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 10:55:18.506076    4493 node_conditions.go:123] node cpu capacity is 2
	I0717 10:55:18.506079    4493 node_conditions.go:105] duration metric: took 189.952755ms to run NodePressure ...
	I0717 10:55:18.506090    4493 start.go:241] waiting for startup goroutines ...
	I0717 10:55:18.506107    4493 start.go:255] writing updated cluster config ...
	I0717 10:55:18.527533    4493 out.go:177] 
	I0717 10:55:18.548977    4493 config.go:182] Loaded profile config "multinode-875000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:55:18.549073    4493 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/config.json ...
	I0717 10:55:18.570746    4493 out.go:177] * Starting "multinode-875000-m03" worker node in "multinode-875000" cluster
	I0717 10:55:18.628649    4493 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:55:18.628682    4493 cache.go:56] Caching tarball of preloaded images
	I0717 10:55:18.628870    4493 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 10:55:18.628889    4493 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:55:18.629014    4493 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/config.json ...
	I0717 10:55:18.629824    4493 start.go:360] acquireMachinesLock for multinode-875000-m03: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:55:18.629924    4493 start.go:364] duration metric: took 76.587µs to acquireMachinesLock for "multinode-875000-m03"
	I0717 10:55:18.629950    4493 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:55:18.629958    4493 fix.go:54] fixHost starting: m03
	I0717 10:55:18.630382    4493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:55:18.630437    4493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:55:18.639596    4493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53234
	I0717 10:55:18.639967    4493 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:55:18.640309    4493 main.go:141] libmachine: Using API Version  1
	I0717 10:55:18.640320    4493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:55:18.640562    4493 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:55:18.640688    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .DriverName
	I0717 10:55:18.640775    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetState
	I0717 10:55:18.640854    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:55:18.640960    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | hyperkit pid from json: 4459
	I0717 10:55:18.641870    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | hyperkit pid 4459 missing from process table
	I0717 10:55:18.641901    4493 fix.go:112] recreateIfNeeded on multinode-875000-m03: state=Stopped err=<nil>
	I0717 10:55:18.641909    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .DriverName
	W0717 10:55:18.641994    4493 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:55:18.662679    4493 out.go:177] * Restarting existing hyperkit VM for "multinode-875000-m03" ...
	I0717 10:55:18.704694    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .Start
	I0717 10:55:18.704928    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:55:18.705049    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/hyperkit.pid
	I0717 10:55:18.705076    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | Using UUID 9f16c9eb-59a4-416c-922e-880fb325e397
	I0717 10:55:18.731073    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | Generated MAC a2:dd:4c:c6:bd:14
	I0717 10:55:18.731094    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-875000
	I0717 10:55:18.731233    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9f16c9eb-59a4-416c-922e-880fb325e397", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b860)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0717 10:55:18.731260    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9f16c9eb-59a4-416c-922e-880fb325e397", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b860)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0717 10:55:18.731329    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9f16c9eb-59a4-416c-922e-880fb325e397", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/multinode-875000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/bzimage,/Users/j
enkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-875000"}
	I0717 10:55:18.731371    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9f16c9eb-59a4-416c-922e-880fb325e397 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/multinode-875000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/mult
inode-875000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-875000"
	I0717 10:55:18.731388    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0717 10:55:18.732842    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 DEBUG: hyperkit: Pid is 4575
	I0717 10:55:18.733381    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | Attempt 0
	I0717 10:55:18.733397    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:55:18.733483    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | hyperkit pid from json: 4575
	I0717 10:55:18.734740    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | Searching for a2:dd:4c:c6:bd:14 in /var/db/dhcpd_leases ...
	I0717 10:55:18.734809    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | Found 16 entries in /var/db/dhcpd_leases!
	I0717 10:55:18.734844    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:de:84:ef:f1:8f:c7 ID:1,de:84:ef:f1:8f:c7 Lease:0x669956d0}
	I0717 10:55:18.734866    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:c1:c6:6d:b5:4e ID:1,92:c1:c6:6d:b5:4e Lease:0x6699568d}
	I0717 10:55:18.734880    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:a2:dd:4c:c6:bd:14 ID:1,a2:dd:4c:c6:bd:14 Lease:0x669804f2}
	I0717 10:55:18.734892    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | Found match: a2:dd:4c:c6:bd:14
	I0717 10:55:18.734898    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetConfigRaw
	I0717 10:55:18.734900    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | IP: 192.169.0.17
	I0717 10:55:18.735577    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetIP
	I0717 10:55:18.735784    4493 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/multinode-875000/config.json ...
	I0717 10:55:18.736385    4493 machine.go:94] provisionDockerMachine start ...
	I0717 10:55:18.736400    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .DriverName
	I0717 10:55:18.736541    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	I0717 10:55:18.736645    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHPort
	I0717 10:55:18.736777    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:18.736923    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:18.737028    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHUsername
	I0717 10:55:18.737169    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:55:18.737328    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0717 10:55:18.737335    4493 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 10:55:18.740480    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0717 10:55:18.748625    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0717 10:55:18.749581    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:55:18.749607    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:55:18.749642    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:55:18.749658    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:55:19.130189    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0717 10:55:19.130205    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0717 10:55:19.244919    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0717 10:55:19.244940    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0717 10:55:19.244950    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0717 10:55:19.244957    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0717 10:55:19.245817    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0717 10:55:19.245830    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0717 10:55:24.518017    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:24 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0717 10:55:24.518034    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:24 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0717 10:55:24.518044    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:24 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0717 10:55:24.541492    4493 main.go:141] libmachine: (multinode-875000-m03) DBG | 2024/07/17 10:55:24 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0717 10:55:29.791395    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 10:55:29.791411    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetMachineName
	I0717 10:55:29.791539    4493 buildroot.go:166] provisioning hostname "multinode-875000-m03"
	I0717 10:55:29.791552    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetMachineName
	I0717 10:55:29.791647    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	I0717 10:55:29.791738    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHPort
	I0717 10:55:29.791848    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:29.791945    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:29.792076    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHUsername
	I0717 10:55:29.792213    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:55:29.792363    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0717 10:55:29.792371    4493 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-875000-m03 && echo "multinode-875000-m03" | sudo tee /etc/hostname
	I0717 10:55:29.851886    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-875000-m03
	
	I0717 10:55:29.851902    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	I0717 10:55:29.852032    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHPort
	I0717 10:55:29.852125    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:29.852225    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:29.852326    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHUsername
	I0717 10:55:29.852459    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:55:29.852609    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0717 10:55:29.852623    4493 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-875000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-875000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-875000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 10:55:29.906344    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 10:55:29.906360    4493 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
	I0717 10:55:29.906369    4493 buildroot.go:174] setting up certificates
	I0717 10:55:29.906375    4493 provision.go:84] configureAuth start
	I0717 10:55:29.906381    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetMachineName
	I0717 10:55:29.906511    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetIP
	I0717 10:55:29.906606    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	I0717 10:55:29.906696    4493 provision.go:143] copyHostCerts
	I0717 10:55:29.906725    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:55:29.906772    4493 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
	I0717 10:55:29.906778    4493 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
	I0717 10:55:29.906974    4493 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
	I0717 10:55:29.907207    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:55:29.907238    4493 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
	I0717 10:55:29.907242    4493 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
	I0717 10:55:29.907311    4493 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
	I0717 10:55:29.907458    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:55:29.907486    4493 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
	I0717 10:55:29.907491    4493 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
	I0717 10:55:29.907583    4493 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
	I0717 10:55:29.907755    4493 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.multinode-875000-m03 san=[127.0.0.1 192.169.0.17 localhost minikube multinode-875000-m03]
	I0717 10:55:30.133100    4493 provision.go:177] copyRemoteCerts
	I0717 10:55:30.133152    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 10:55:30.133168    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	I0717 10:55:30.133312    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHPort
	I0717 10:55:30.133411    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:30.133487    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHUsername
	I0717 10:55:30.133564    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/id_rsa Username:docker}
	I0717 10:55:30.172522    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 10:55:30.172601    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 10:55:30.199016    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 10:55:30.199089    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 10:55:30.218622    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 10:55:30.218695    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 10:55:30.238961    4493 provision.go:87] duration metric: took 332.569934ms to configureAuth
	I0717 10:55:30.238975    4493 buildroot.go:189] setting minikube options for container-runtime
	I0717 10:55:30.239137    4493 config.go:182] Loaded profile config "multinode-875000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:55:30.239151    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .DriverName
	I0717 10:55:30.239286    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	I0717 10:55:30.239379    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHPort
	I0717 10:55:30.239464    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:30.239546    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:30.239624    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHUsername
	I0717 10:55:30.239731    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:55:30.239854    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0717 10:55:30.239861    4493 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 10:55:30.288639    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 10:55:30.288652    4493 buildroot.go:70] root file system type: tmpfs
	I0717 10:55:30.288720    4493 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 10:55:30.288732    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	I0717 10:55:30.288866    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHPort
	I0717 10:55:30.288964    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:30.289045    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:30.289128    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHUsername
	I0717 10:55:30.289245    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:55:30.289386    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0717 10:55:30.289435    4493 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.15"
	Environment="NO_PROXY=192.169.0.15,192.169.0.16"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 10:55:30.348406    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.15
	Environment=NO_PROXY=192.169.0.15,192.169.0.16
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 10:55:30.348425    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	I0717 10:55:30.348572    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHPort
	I0717 10:55:30.348661    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:30.348756    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:30.348839    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHUsername
	I0717 10:55:30.348982    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:55:30.349145    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0717 10:55:30.349158    4493 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 10:55:31.884992    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 10:55:31.885007    4493 machine.go:97] duration metric: took 13.148260509s to provisionDockerMachine
	I0717 10:55:31.885019    4493 start.go:293] postStartSetup for "multinode-875000-m03" (driver="hyperkit")
	I0717 10:55:31.885027    4493 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 10:55:31.885038    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .DriverName
	I0717 10:55:31.885202    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 10:55:31.885213    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	I0717 10:55:31.885301    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHPort
	I0717 10:55:31.885388    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:31.885478    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHUsername
	I0717 10:55:31.885566    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/id_rsa Username:docker}
	I0717 10:55:31.916267    4493 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 10:55:31.919172    4493 command_runner.go:130] > NAME=Buildroot
	I0717 10:55:31.919184    4493 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0717 10:55:31.919188    4493 command_runner.go:130] > ID=buildroot
	I0717 10:55:31.919192    4493 command_runner.go:130] > VERSION_ID=2023.02.9
	I0717 10:55:31.919213    4493 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0717 10:55:31.919404    4493 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 10:55:31.919414    4493 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
	I0717 10:55:31.919495    4493 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
	I0717 10:55:31.919638    4493 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
	I0717 10:55:31.919645    4493 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
	I0717 10:55:31.919804    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 10:55:31.927953    4493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
	I0717 10:55:31.947599    4493 start.go:296] duration metric: took 62.569789ms for postStartSetup
	I0717 10:55:31.947621    4493 fix.go:56] duration metric: took 13.317307309s for fixHost
	I0717 10:55:31.947655    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	I0717 10:55:31.947813    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHPort
	I0717 10:55:31.947906    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:31.947995    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:31.948094    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHUsername
	I0717 10:55:31.948200    4493 main.go:141] libmachine: Using SSH client type: native
	I0717 10:55:31.948331    4493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xda22060] 0xda24dc0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0717 10:55:31.948338    4493 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 10:55:31.998271    4493 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721238931.935250312
	
	I0717 10:55:31.998281    4493 fix.go:216] guest clock: 1721238931.935250312
	I0717 10:55:31.998286    4493 fix.go:229] Guest: 2024-07-17 10:55:31.935250312 -0700 PDT Remote: 2024-07-17 10:55:31.947629 -0700 PDT m=+143.539947732 (delta=-12.378688ms)
	I0717 10:55:31.998305    4493 fix.go:200] guest clock delta is within tolerance: -12.378688ms
	I0717 10:55:31.998310    4493 start.go:83] releasing machines lock for "multinode-875000-m03", held for 13.368017038s
	I0717 10:55:31.998327    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .DriverName
	I0717 10:55:31.998458    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetIP
	I0717 10:55:32.037875    4493 out.go:177] * Found network options:
	I0717 10:55:32.059947    4493 out.go:177]   - NO_PROXY=192.169.0.15,192.169.0.16
	W0717 10:55:32.081831    4493 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:55:32.081867    4493 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:55:32.081887    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .DriverName
	I0717 10:55:32.082744    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .DriverName
	I0717 10:55:32.082998    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .DriverName
	I0717 10:55:32.083119    4493 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 10:55:32.083157    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	W0717 10:55:32.083270    4493 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 10:55:32.083295    4493 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 10:55:32.083350    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHPort
	I0717 10:55:32.083379    4493 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 10:55:32.083397    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHHostname
	I0717 10:55:32.083535    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:32.083563    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHPort
	I0717 10:55:32.083740    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHUsername
	I0717 10:55:32.083784    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHKeyPath
	I0717 10:55:32.083929    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/id_rsa Username:docker}
	I0717 10:55:32.083987    4493 main.go:141] libmachine: (multinode-875000-m03) Calling .GetSSHUsername
	I0717 10:55:32.084131    4493 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m03/id_rsa Username:docker}
	I0717 10:55:32.111935    4493 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 10:55:32.111960    4493 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 10:55:32.112015    4493 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 10:55:32.160826    4493 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 10:55:32.161606    4493 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0717 10:55:32.161649    4493 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 10:55:32.161660    4493 start.go:495] detecting cgroup driver to use...
	I0717 10:55:32.161730    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:55:32.185179    4493 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0717 10:55:32.185265    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 10:55:32.194356    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 10:55:32.203631    4493 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 10:55:32.203692    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 10:55:32.216813    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:55:32.229375    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 10:55:32.240069    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 10:55:32.248338    4493 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 10:55:32.256655    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 10:55:32.264843    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 10:55:32.273009    4493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 10:55:32.281296    4493 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 10:55:32.288733    4493 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 10:55:32.288815    4493 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 10:55:32.296275    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:55:32.385477    4493 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 10:55:32.403708    4493 start.go:495] detecting cgroup driver to use...
	I0717 10:55:32.403776    4493 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 10:55:32.419867    4493 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0717 10:55:32.420295    4493 command_runner.go:130] > [Unit]
	I0717 10:55:32.420304    4493 command_runner.go:130] > Description=Docker Application Container Engine
	I0717 10:55:32.420309    4493 command_runner.go:130] > Documentation=https://docs.docker.com
	I0717 10:55:32.420314    4493 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0717 10:55:32.420319    4493 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0717 10:55:32.420329    4493 command_runner.go:130] > StartLimitBurst=3
	I0717 10:55:32.420333    4493 command_runner.go:130] > StartLimitIntervalSec=60
	I0717 10:55:32.420337    4493 command_runner.go:130] > [Service]
	I0717 10:55:32.420340    4493 command_runner.go:130] > Type=notify
	I0717 10:55:32.420344    4493 command_runner.go:130] > Restart=on-failure
	I0717 10:55:32.420347    4493 command_runner.go:130] > Environment=NO_PROXY=192.169.0.15
	I0717 10:55:32.420353    4493 command_runner.go:130] > Environment=NO_PROXY=192.169.0.15,192.169.0.16
	I0717 10:55:32.420360    4493 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0717 10:55:32.420368    4493 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0717 10:55:32.420374    4493 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0717 10:55:32.420380    4493 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0717 10:55:32.420386    4493 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0717 10:55:32.420392    4493 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0717 10:55:32.420400    4493 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0717 10:55:32.420405    4493 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0717 10:55:32.420411    4493 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0717 10:55:32.420414    4493 command_runner.go:130] > ExecStart=
	I0717 10:55:32.420429    4493 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0717 10:55:32.420435    4493 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0717 10:55:32.420441    4493 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0717 10:55:32.420447    4493 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0717 10:55:32.420451    4493 command_runner.go:130] > LimitNOFILE=infinity
	I0717 10:55:32.420456    4493 command_runner.go:130] > LimitNPROC=infinity
	I0717 10:55:32.420460    4493 command_runner.go:130] > LimitCORE=infinity
	I0717 10:55:32.420465    4493 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0717 10:55:32.420469    4493 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0717 10:55:32.420472    4493 command_runner.go:130] > TasksMax=infinity
	I0717 10:55:32.420475    4493 command_runner.go:130] > TimeoutStartSec=0
	I0717 10:55:32.420482    4493 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0717 10:55:32.420485    4493 command_runner.go:130] > Delegate=yes
	I0717 10:55:32.420494    4493 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0717 10:55:32.420499    4493 command_runner.go:130] > KillMode=process
	I0717 10:55:32.420502    4493 command_runner.go:130] > [Install]
	I0717 10:55:32.420505    4493 command_runner.go:130] > WantedBy=multi-user.target
	I0717 10:55:32.420579    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:55:32.431667    4493 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 10:55:32.449419    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 10:55:32.460531    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:55:32.470937    4493 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 10:55:32.494067    4493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 10:55:32.504843    4493 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 10:55:32.519369    4493 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0717 10:55:32.519610    4493 ssh_runner.go:195] Run: which cri-dockerd
	I0717 10:55:32.522315    4493 command_runner.go:130] > /usr/bin/cri-dockerd
	I0717 10:55:32.522496    4493 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 10:55:32.529531    4493 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 10:55:32.542789    4493 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 10:55:32.634151    4493 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 10:55:32.745594    4493 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 10:55:32.745625    4493 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 10:55:32.759807    4493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 10:55:32.847881    4493 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 10:56:33.759281    4493 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0717 10:56:33.759296    4493 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0717 10:56:33.759308    4493 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.909780876s)
	I0717 10:56:33.759377    4493 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0717 10:56:33.768846    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 systemd[1]: Starting Docker Application Container Engine...
	I0717 10:56:33.768860    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:30.542486449Z" level=info msg="Starting up"
	I0717 10:56:33.768873    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:30.543020597Z" level=info msg="containerd not running, starting managed containerd"
	I0717 10:56:33.768888    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:30.543629257Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=497
	I0717 10:56:33.768898    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.563879235Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	I0717 10:56:33.768908    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578639071Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0717 10:56:33.768918    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578688475Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0717 10:56:33.768927    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578734687Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0717 10:56:33.768937    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578744907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0717 10:56:33.768948    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578880671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0717 10:56:33.768965    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578915546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0717 10:56:33.768985    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579089229Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0717 10:56:33.768995    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579124372Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0717 10:56:33.769006    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579137516Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0717 10:56:33.769015    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579155509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0717 10:56:33.769025    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579257039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0717 10:56:33.769034    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579442615Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0717 10:56:33.769048    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581063677Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0717 10:56:33.769057    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581103793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0717 10:56:33.769143    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581217146Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0717 10:56:33.769156    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581251600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0717 10:56:33.769166    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581368444Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0717 10:56:33.769174    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581415284Z" level=info msg="metadata content store policy set" policy=shared
	I0717 10:56:33.769184    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582705517Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0717 10:56:33.769193    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582728255Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0717 10:56:33.769201    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582738757Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0717 10:56:33.769210    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582749147Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0717 10:56:33.769222    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582757689Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0717 10:56:33.769231    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582813384Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0717 10:56:33.769239    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583020255Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0717 10:56:33.769248    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583090475Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0717 10:56:33.769257    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583101536Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0717 10:56:33.769266    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583109897Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0717 10:56:33.769276    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583118535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0717 10:56:33.769286    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583127458Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0717 10:56:33.769295    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583135620Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0717 10:56:33.769304    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583144927Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0717 10:56:33.769315    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583153844Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0717 10:56:33.769326    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583165258Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0717 10:56:33.769483    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583174183Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0717 10:56:33.769499    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583181925Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0717 10:56:33.769508    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583194324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769517    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583203455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769526    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583212086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769535    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583221149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769544    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583229489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769556    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583238022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769565    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583251699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769574    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583263339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769583    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583271970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769592    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583281243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769602    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583288865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769611    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583296689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769620    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583305583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769629    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583318438Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0717 10:56:33.769637    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583332773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769646    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583341417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769655    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583349074Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0717 10:56:33.769665    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583375670Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0717 10:56:33.769676    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583386642Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0717 10:56:33.769686    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583394389Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0717 10:56:33.769810    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583402289Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0717 10:56:33.769821    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583409057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0717 10:56:33.769836    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583418556Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0717 10:56:33.769845    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583425769Z" level=info msg="NRI interface is disabled by configuration."
	I0717 10:56:33.769854    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583559218Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0717 10:56:33.769861    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583617368Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0717 10:56:33.769870    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583645404Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0717 10:56:33.769877    4493 command_runner.go:130] > Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583678442Z" level=info msg="containerd successfully booted in 0.021002s"
	I0717 10:56:33.769885    4493 command_runner.go:130] > Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.566115906Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0717 10:56:33.769893    4493 command_runner.go:130] > Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.581160310Z" level=info msg="Loading containers: start."
	I0717 10:56:33.769912    4493 command_runner.go:130] > Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.678906471Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0717 10:56:33.769923    4493 command_runner.go:130] > Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.740696250Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0717 10:56:33.769931    4493 command_runner.go:130] > Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.786371404Z" level=info msg="Loading containers: done."
	I0717 10:56:33.769941    4493 command_runner.go:130] > Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.795512822Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0717 10:56:33.769948    4493 command_runner.go:130] > Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.795668358Z" level=info msg="Daemon has completed initialization"
	I0717 10:56:33.769956    4493 command_runner.go:130] > Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.818093328Z" level=info msg="API listen on /var/run/docker.sock"
	I0717 10:56:33.769963    4493 command_runner.go:130] > Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.818285430Z" level=info msg="API listen on [::]:2376"
	I0717 10:56:33.769969    4493 command_runner.go:130] > Jul 17 17:55:31 multinode-875000-m03 systemd[1]: Started Docker Application Container Engine.
	I0717 10:56:33.769976    4493 command_runner.go:130] > Jul 17 17:55:32 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:32.813799949Z" level=info msg="Processing signal 'terminated'"
	I0717 10:56:33.769983    4493 command_runner.go:130] > Jul 17 17:55:32 multinode-875000-m03 systemd[1]: Stopping Docker Application Container Engine...
	I0717 10:56:33.769992    4493 command_runner.go:130] > Jul 17 17:55:32 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:32.815030335Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0717 10:56:33.770005    4493 command_runner.go:130] > Jul 17 17:55:32 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:32.815161263Z" level=info msg="Daemon shutdown complete"
	I0717 10:56:33.770014    4493 command_runner.go:130] > Jul 17 17:55:32 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:32.815281374Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0717 10:56:33.770046    4493 command_runner.go:130] > Jul 17 17:55:32 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:32.815427332Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0717 10:56:33.770053    4493 command_runner.go:130] > Jul 17 17:55:33 multinode-875000-m03 systemd[1]: docker.service: Deactivated successfully.
	I0717 10:56:33.770059    4493 command_runner.go:130] > Jul 17 17:55:33 multinode-875000-m03 systemd[1]: Stopped Docker Application Container Engine.
	I0717 10:56:33.770066    4493 command_runner.go:130] > Jul 17 17:55:33 multinode-875000-m03 systemd[1]: Starting Docker Application Container Engine...
	I0717 10:56:33.770073    4493 command_runner.go:130] > Jul 17 17:55:33 multinode-875000-m03 dockerd[853]: time="2024-07-17T17:55:33.852812593Z" level=info msg="Starting up"
	I0717 10:56:33.770084    4493 command_runner.go:130] > Jul 17 17:56:33 multinode-875000-m03 dockerd[853]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0717 10:56:33.770091    4493 command_runner.go:130] > Jul 17 17:56:33 multinode-875000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0717 10:56:33.770098    4493 command_runner.go:130] > Jul 17 17:56:33 multinode-875000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0717 10:56:33.770105    4493 command_runner.go:130] > Jul 17 17:56:33 multinode-875000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	I0717 10:56:33.794710    4493 out.go:177] 
	W0717 10:56:33.816416    4493 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 17 17:55:30 multinode-875000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 17:55:30 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:30.542486449Z" level=info msg="Starting up"
	Jul 17 17:55:30 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:30.543020597Z" level=info msg="containerd not running, starting managed containerd"
	Jul 17 17:55:30 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:30.543629257Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=497
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.563879235Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578639071Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578688475Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578734687Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578744907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578880671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.578915546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579089229Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579124372Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579137516Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579155509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579257039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.579442615Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581063677Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581103793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581217146Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581251600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581368444Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.581415284Z" level=info msg="metadata content store policy set" policy=shared
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582705517Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582728255Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582738757Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582749147Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582757689Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.582813384Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583020255Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583090475Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583101536Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583109897Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583118535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583127458Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583135620Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583144927Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583153844Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583165258Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583174183Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583181925Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583194324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583203455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583212086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583221149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583229489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583238022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583251699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583263339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583271970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583281243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583288865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583296689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583305583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583318438Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583332773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583341417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583349074Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583375670Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583386642Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583394389Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583402289Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583409057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583418556Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583425769Z" level=info msg="NRI interface is disabled by configuration."
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583559218Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583617368Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583645404Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 17 17:55:30 multinode-875000-m03 dockerd[497]: time="2024-07-17T17:55:30.583678442Z" level=info msg="containerd successfully booted in 0.021002s"
	Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.566115906Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.581160310Z" level=info msg="Loading containers: start."
	Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.678906471Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.740696250Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.786371404Z" level=info msg="Loading containers: done."
	Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.795512822Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.795668358Z" level=info msg="Daemon has completed initialization"
	Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.818093328Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 17:55:31 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:31.818285430Z" level=info msg="API listen on [::]:2376"
	Jul 17 17:55:31 multinode-875000-m03 systemd[1]: Started Docker Application Container Engine.
	Jul 17 17:55:32 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:32.813799949Z" level=info msg="Processing signal 'terminated'"
	Jul 17 17:55:32 multinode-875000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 17:55:32 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:32.815030335Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 17:55:32 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:32.815161263Z" level=info msg="Daemon shutdown complete"
	Jul 17 17:55:32 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:32.815281374Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 17 17:55:32 multinode-875000-m03 dockerd[491]: time="2024-07-17T17:55:32.815427332Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 17:55:33 multinode-875000-m03 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 17:55:33 multinode-875000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 17:55:33 multinode-875000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 17:55:33 multinode-875000-m03 dockerd[853]: time="2024-07-17T17:55:33.852812593Z" level=info msg="Starting up"
	Jul 17 17:56:33 multinode-875000-m03 dockerd[853]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 17 17:56:33 multinode-875000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 17 17:56:33 multinode-875000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 17 17:56:33 multinode-875000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0717 10:56:33.816540    4493 out.go:239] * 
	W0717 10:56:33.817502    4493 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:56:33.879510    4493 out.go:177] 
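	The failed restart above comes down to the second dockerd start on multinode-875000-m03 (pid 853) never reaching containerd: it dials /run/containerd/containerd.sock and gives up with "context deadline exceeded" roughly 60 seconds later, so docker.service exits and minikube aborts with RUNTIME_ENABLE. A minimal way to confirm from the host that the system containerd unit is the piece that appears never to have come up, assuming the multinode-875000 profile and the m03 node naming used in this run:

	    # Inspect containerd on the affected node via minikube's SSH helper
	    minikube ssh -p multinode-875000 -n m03 -- sudo systemctl status containerd --no-pager
	    minikube ssh -p multinode-875000 -n m03 -- sudo journalctl -u containerd --no-pager | tail -n 50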
	
	
	==> Docker <==
	Jul 17 17:54:12 multinode-875000 dockerd[855]: time="2024-07-17T17:54:12.661365414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:54:12 multinode-875000 dockerd[855]: time="2024-07-17T17:54:12.661426487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:54:12 multinode-875000 dockerd[855]: time="2024-07-17T17:54:12.661534076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:54:12 multinode-875000 dockerd[855]: time="2024-07-17T17:54:12.666077036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:54:12 multinode-875000 dockerd[855]: time="2024-07-17T17:54:12.666169099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:54:12 multinode-875000 dockerd[855]: time="2024-07-17T17:54:12.666194138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:54:12 multinode-875000 dockerd[855]: time="2024-07-17T17:54:12.666457518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:54:12 multinode-875000 cri-dockerd[1101]: time="2024-07-17T17:54:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3e1abd81dd70ede96875805990f576d8319f7edad7a0f27dac28c871a2d78972/resolv.conf as [nameserver 192.169.0.1]"
	Jul 17 17:54:12 multinode-875000 cri-dockerd[1101]: time="2024-07-17T17:54:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7de3d2db3832e94ee5b486d9fa91ebb0404aba3d1391df16a967f1c6cdd6c86f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 17:54:12 multinode-875000 dockerd[855]: time="2024-07-17T17:54:12.979985169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:54:12 multinode-875000 dockerd[855]: time="2024-07-17T17:54:12.980167706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:54:12 multinode-875000 dockerd[855]: time="2024-07-17T17:54:12.980230363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:54:12 multinode-875000 dockerd[855]: time="2024-07-17T17:54:12.980451860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:54:13 multinode-875000 dockerd[855]: time="2024-07-17T17:54:13.002891110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:54:13 multinode-875000 dockerd[855]: time="2024-07-17T17:54:13.002959226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:54:13 multinode-875000 dockerd[855]: time="2024-07-17T17:54:13.002972228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:54:13 multinode-875000 dockerd[855]: time="2024-07-17T17:54:13.003039986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:54:27 multinode-875000 dockerd[849]: time="2024-07-17T17:54:27.609802559Z" level=info msg="ignoring event" container=4cff44cd1c22b50424d6cacaa33d34640d78381c67b6fc559dbb4514a9a2bf8a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 17:54:27 multinode-875000 dockerd[855]: time="2024-07-17T17:54:27.610226861Z" level=info msg="shim disconnected" id=4cff44cd1c22b50424d6cacaa33d34640d78381c67b6fc559dbb4514a9a2bf8a namespace=moby
	Jul 17 17:54:27 multinode-875000 dockerd[855]: time="2024-07-17T17:54:27.610280240Z" level=warning msg="cleaning up after shim disconnected" id=4cff44cd1c22b50424d6cacaa33d34640d78381c67b6fc559dbb4514a9a2bf8a namespace=moby
	Jul 17 17:54:27 multinode-875000 dockerd[855]: time="2024-07-17T17:54:27.610290388Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 17 17:54:39 multinode-875000 dockerd[855]: time="2024-07-17T17:54:39.819657143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 17:54:39 multinode-875000 dockerd[855]: time="2024-07-17T17:54:39.819766449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 17:54:39 multinode-875000 dockerd[855]: time="2024-07-17T17:54:39.819828959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 17:54:39 multinode-875000 dockerd[855]: time="2024-07-17T17:54:39.819973629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	235fce6e1b47d       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   683a8b46f2896       storage-provisioner
	9173dfcdaaa58       8c811b4aec35f                                                                                         2 minutes ago        Running             busybox                   1                   7de3d2db3832e       busybox-fc5497c4f-kfksv
	f3f4ec8b8ae1b       cbb01a7bd410d                                                                                         2 minutes ago        Running             coredns                   1                   3e1abd81dd70e       coredns-7db6d8ff4d-nlwxm
	ef977d465b788       5cc3abe5717db                                                                                         2 minutes ago        Running             kindnet-cni               1                   64df2f029f435       kindnet-hwkds
	4cff44cd1c22b       6e38f40d628db                                                                                         2 minutes ago        Exited              storage-provisioner       1                   683a8b46f2896       storage-provisioner
	34942e2687520       53c535741fb44                                                                                         2 minutes ago        Running             kube-proxy                1                   acc6efcba2c1c       kube-proxy-zs8f8
	25004798db78f       e874818b3caac                                                                                         2 minutes ago        Running             kube-controller-manager   1                   0602906c4713d       kube-controller-manager-multinode-875000
	5def6f3cb0d4c       3861cfcd7c04c                                                                                         2 minutes ago        Running             etcd                      1                   baf6efa89595a       etcd-multinode-875000
	8dafcf7bb9f47       7820c83aa1394                                                                                         2 minutes ago        Running             kube-scheduler            1                   5bbac45f36a76       kube-scheduler-multinode-875000
	d09e2c3772132       56ce0fd9fb532                                                                                         2 minutes ago        Running             kube-apiserver            1                   64861cf3ff458       kube-apiserver-multinode-875000
	b851e2454b5fe       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   5 minutes ago        Exited              busybox                   0                   fa2418b7effa0       busybox-fc5497c4f-kfksv
	628249f927da3       cbb01a7bd410d                                                                                         6 minutes ago        Exited              coredns                   0                   8d5379f364df5       coredns-7db6d8ff4d-nlwxm
	f9b27278d7894       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              6 minutes ago        Exited              kindnet-cni               0                   004a5be3ccef4       kindnet-hwkds
	cdb993aecac10       53c535741fb44                                                                                         6 minutes ago        Exited              kube-proxy                0                   6c2c175018f84       kube-proxy-zs8f8
	fbeb1615ce079       3861cfcd7c04c                                                                                         6 minutes ago        Exited              etcd                      0                   4d352419a7588       etcd-multinode-875000
	2966fb0e7dc18       7820c83aa1394                                                                                         6 minutes ago        Exited              kube-scheduler            0                   3f3c486ee3b83       kube-scheduler-multinode-875000
	6a219499b617a       56ce0fd9fb532                                                                                         6 minutes ago        Exited              kube-apiserver            0                   c6831086186cb       kube-apiserver-multinode-875000
	f441455bef841       e874818b3caac                                                                                         6 minutes ago        Exited              kube-controller-manager   0                   4355a2bd64f70       kube-controller-manager-multinode-875000
	
	
	==> coredns [628249f927da] <==
	[INFO] 10.244.1.2:37180 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000040844s
	[INFO] 10.244.1.2:41318 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000041799s
	[INFO] 10.244.1.2:33851 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000040817s
	[INFO] 10.244.1.2:33575 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000369s
	[INFO] 10.244.1.2:40004 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075074s
	[INFO] 10.244.1.2:43853 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088805s
	[INFO] 10.244.1.2:41962 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000044175s
	[INFO] 10.244.0.3:56836 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116412s
	[INFO] 10.244.0.3:44527 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000053797s
	[INFO] 10.244.0.3:43095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000039662s
	[INFO] 10.244.0.3:51791 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047322s
	[INFO] 10.244.1.2:54442 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111243s
	[INFO] 10.244.1.2:60790 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006834s
	[INFO] 10.244.1.2:36472 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057711s
	[INFO] 10.244.1.2:33587 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083114s
	[INFO] 10.244.0.3:48840 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000056187s
	[INFO] 10.244.0.3:55498 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000042852s
	[INFO] 10.244.0.3:37158 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000040354s
	[INFO] 10.244.0.3:55673 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000074492s
	[INFO] 10.244.1.2:33076 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096443s
	[INFO] 10.244.1.2:34588 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000045111s
	[INFO] 10.244.1.2:43431 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000037358s
	[INFO] 10.244.1.2:57059 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000036127s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f3f4ec8b8ae1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34193 - 15906 "HINFO IN 2102528731917452452.8183080375771683246. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011614592s
	
	
	==> describe nodes <==
	Name:               multinode-875000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-875000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=multinode-875000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T10_49_51_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:49:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-875000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:56:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:54:10 +0000   Wed, 17 Jul 2024 17:49:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:54:10 +0000   Wed, 17 Jul 2024 17:49:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:54:10 +0000   Wed, 17 Jul 2024 17:49:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:54:10 +0000   Wed, 17 Jul 2024 17:54:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.15
	  Hostname:    multinode-875000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 f13e266cb5c94e4299d1782d075f4429
	  System UUID:                0b49495d-0000-0000-b943-8b478d8e6ab6
	  Boot ID:                    375d0bb2-75a9-40f5-a3d5-5d4048d1d0a7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kfksv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 coredns-7db6d8ff4d-nlwxm                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m31s
	  kube-system                 etcd-multinode-875000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m46s
	  kube-system                 kindnet-hwkds                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m31s
	  kube-system                 kube-apiserver-multinode-875000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 kube-controller-manager-multinode-875000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 kube-proxy-zs8f8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-scheduler-multinode-875000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m29s                  kube-proxy       
	  Normal  Starting                 2m37s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m46s                  kubelet          Node multinode-875000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m46s                  kubelet          Node multinode-875000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m46s                  kubelet          Node multinode-875000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m46s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           6m32s                  node-controller  Node multinode-875000 event: Registered Node multinode-875000 in Controller
	  Normal  NodeReady                6m16s                  kubelet          Node multinode-875000 status is now: NodeReady
	  Normal  Starting                 2m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m43s (x8 over 2m43s)  kubelet          Node multinode-875000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m43s (x8 over 2m43s)  kubelet          Node multinode-875000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m43s (x7 over 2m43s)  kubelet          Node multinode-875000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m28s                  node-controller  Node multinode-875000 event: Registered Node multinode-875000 in Controller
	
	
	Name:               multinode-875000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-875000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=multinode-875000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T10_55_01_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:55:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-875000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:56:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:55:16 +0000   Wed, 17 Jul 2024 17:55:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:55:16 +0000   Wed, 17 Jul 2024 17:55:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:55:16 +0000   Wed, 17 Jul 2024 17:55:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:55:16 +0000   Wed, 17 Jul 2024 17:55:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.16
	  Hostname:    multinode-875000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 94b9bf76ccd24c9394b7b9f5b676ba70
	  System UUID:                25304156-0000-0000-982c-d8f8ac747f78
	  Boot ID:                    c7b3caa5-3958-402b-9c9e-543e25317c0d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lskpj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kindnet-pj9kh              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m53s
	  kube-system                 kube-proxy-tp2zz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m46s                  kube-proxy  
	  Normal  Starting                 91s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  5m53s (x2 over 5m53s)  kubelet     Node multinode-875000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m53s (x2 over 5m53s)  kubelet     Node multinode-875000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m53s (x2 over 5m53s)  kubelet     Node multinode-875000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m53s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m31s                  kubelet     Node multinode-875000-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  94s (x2 over 94s)      kubelet     Node multinode-875000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s (x2 over 94s)      kubelet     Node multinode-875000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s (x2 over 94s)      kubelet     Node multinode-875000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  94s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                79s                    kubelet     Node multinode-875000-m02 status is now: NodeReady
	
	
	Name:               multinode-875000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-875000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=multinode-875000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T10_52_29_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:52:28 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-875000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:52:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Jul 2024 17:52:47 +0000   Wed, 17 Jul 2024 17:54:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Jul 2024 17:52:47 +0000   Wed, 17 Jul 2024 17:54:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Jul 2024 17:52:47 +0000   Wed, 17 Jul 2024 17:54:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Jul 2024 17:52:47 +0000   Wed, 17 Jul 2024 17:54:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.17
	  Hostname:    multinode-875000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 844f50470c1d4271b5caccff021b8492
	  System UUID:                9f16416c-0000-0000-922e-880fb325e397
	  Boot ID:                    39dfb639-411c-4702-b2d8-bdb37f8f7b32
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fnltt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m2s
	  kube-system                 kube-proxy-dnn4j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m55s                kube-proxy       
	  Normal  Starting                 4m4s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  5m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m2s (x2 over 5m2s)  kubelet          Node multinode-875000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m2s (x2 over 5m2s)  kubelet          Node multinode-875000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m2s (x2 over 5m2s)  kubelet          Node multinode-875000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m39s                kubelet          Node multinode-875000-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  4m7s (x2 over 4m7s)  kubelet          Node multinode-875000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x2 over 4m7s)  kubelet          Node multinode-875000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x2 over 4m7s)  kubelet          Node multinode-875000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                 node-controller  Node multinode-875000-m03 event: Registered Node multinode-875000-m03 in Controller
	  Normal  NodeReady                3m48s                kubelet          Node multinode-875000-m03 status is now: NodeReady
	  Normal  RegisteredNode           2m28s                node-controller  Node multinode-875000-m03 event: Registered Node multinode-875000-m03 in Controller
	  Normal  NodeNotReady             108s                 node-controller  Node multinode-875000-m03 status is now: NodeNotReady
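	The Unknown conditions and unreachable taints above are consistent with that runtime failure: the kubelet on multinode-875000-m03 stopped posting status (transition at 17:54:47) and the node-controller marked the node NotReady. Assuming kubectl is pointed at this run's kubeconfig, the same view can be cross-checked directly:

	    kubectl get nodes -o wide
	    kubectl describe node multinode-875000-m03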
	
	
	==> dmesg <==
	[  +5.342898] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006731] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.497734] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.239053] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +25.182519] systemd-fstab-generator[491]: Ignoring "noauto" option for root device
	[  +0.087513] systemd-fstab-generator[503]: Ignoring "noauto" option for root device
	[  +1.823330] systemd-fstab-generator[777]: Ignoring "noauto" option for root device
	[  +0.254870] systemd-fstab-generator[815]: Ignoring "noauto" option for root device
	[  +0.099590] systemd-fstab-generator[827]: Ignoring "noauto" option for root device
	[  +0.110965] systemd-fstab-generator[841]: Ignoring "noauto" option for root device
	[  +2.431907] systemd-fstab-generator[1054]: Ignoring "noauto" option for root device
	[  +0.100753] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	[  +0.108822] systemd-fstab-generator[1079]: Ignoring "noauto" option for root device
	[  +0.048875] kauditd_printk_skb: 239 callbacks suppressed
	[  +0.069519] systemd-fstab-generator[1093]: Ignoring "noauto" option for root device
	[  +0.392205] systemd-fstab-generator[1215]: Ignoring "noauto" option for root device
	[  +2.225285] systemd-fstab-generator[1351]: Ignoring "noauto" option for root device
	[  +4.685000] kauditd_printk_skb: 128 callbacks suppressed
	[  +2.448262] systemd-fstab-generator[2167]: Ignoring "noauto" option for root device
	[Jul17 17:54] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.191339] kauditd_printk_skb: 2 callbacks suppressed
	[ +14.609112] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [5def6f3cb0d4] <==
	{"level":"info","ts":"2024-07-17T17:53:53.7836Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7bf99276c76e9898","local-member-id":"4d34c1a0c90b9650","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T17:53:53.783953Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T17:53:53.783212Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T17:53:53.785Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"4d34c1a0c90b9650","initial-advertise-peer-urls":["https://192.169.0.15:2380"],"listen-peer-urls":["https://192.169.0.15:2380"],"advertise-client-urls":["https://192.169.0.15:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.15:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T17:53:53.78504Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T17:53:53.78323Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.15:2380"}
	{"level":"info","ts":"2024-07-17T17:53:53.785068Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.15:2380"}
	{"level":"info","ts":"2024-07-17T17:53:53.783404Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T17:53:53.785533Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T17:53:53.7859Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T17:53:53.788303Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"4d34c1a0c90b9650","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-07-17T17:53:54.058323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d34c1a0c90b9650 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T17:53:54.058388Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d34c1a0c90b9650 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:53:54.058407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d34c1a0c90b9650 received MsgPreVoteResp from 4d34c1a0c90b9650 at term 2"}
	{"level":"info","ts":"2024-07-17T17:53:54.058418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d34c1a0c90b9650 became candidate at term 3"}
	{"level":"info","ts":"2024-07-17T17:53:54.058423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d34c1a0c90b9650 received MsgVoteResp from 4d34c1a0c90b9650 at term 3"}
	{"level":"info","ts":"2024-07-17T17:53:54.058429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d34c1a0c90b9650 became leader at term 3"}
	{"level":"info","ts":"2024-07-17T17:53:54.058433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4d34c1a0c90b9650 elected leader 4d34c1a0c90b9650 at term 3"}
	{"level":"info","ts":"2024-07-17T17:53:54.066715Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"4d34c1a0c90b9650","local-member-attributes":"{Name:multinode-875000 ClientURLs:[https://192.169.0.15:2379]}","request-path":"/0/members/4d34c1a0c90b9650/attributes","cluster-id":"7bf99276c76e9898","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T17:53:54.066798Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T17:53:54.066937Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T17:53:54.068631Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T17:53:54.072961Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T17:53:54.074344Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T17:53:54.087693Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.15:2379"}
	
	
	==> etcd [fbeb1615ce07] <==
	{"level":"info","ts":"2024-07-17T17:49:46.321068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d34c1a0c90b9650 became candidate at term 2"}
	{"level":"info","ts":"2024-07-17T17:49:46.321072Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d34c1a0c90b9650 received MsgVoteResp from 4d34c1a0c90b9650 at term 2"}
	{"level":"info","ts":"2024-07-17T17:49:46.321078Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d34c1a0c90b9650 became leader at term 2"}
	{"level":"info","ts":"2024-07-17T17:49:46.321083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4d34c1a0c90b9650 elected leader 4d34c1a0c90b9650 at term 2"}
	{"level":"info","ts":"2024-07-17T17:49:46.329301Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"4d34c1a0c90b9650","local-member-attributes":"{Name:multinode-875000 ClientURLs:[https://192.169.0.15:2379]}","request-path":"/0/members/4d34c1a0c90b9650/attributes","cluster-id":"7bf99276c76e9898","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T17:49:46.329344Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T17:49:46.3295Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T17:49:46.329863Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T17:49:46.330106Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T17:49:46.330272Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T17:49:46.330257Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7bf99276c76e9898","local-member-id":"4d34c1a0c90b9650","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T17:49:46.330498Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T17:49:46.330614Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T17:49:46.3317Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T17:49:46.333177Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.15:2379"}
	{"level":"info","ts":"2024-07-17T17:53:00.52027Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-17T17:53:00.5203Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-875000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.15:2380"],"advertise-client-urls":["https://192.169.0.15:2379"]}
	{"level":"warn","ts":"2024-07-17T17:53:00.520374Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T17:53:00.520432Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T17:53:00.542223Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.15:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T17:53:00.542249Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.15:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T17:53:00.542306Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"4d34c1a0c90b9650","current-leader-member-id":"4d34c1a0c90b9650"}
	{"level":"info","ts":"2024-07-17T17:53:00.544943Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.15:2380"}
	{"level":"info","ts":"2024-07-17T17:53:00.545019Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.15:2380"}
	{"level":"info","ts":"2024-07-17T17:53:00.545027Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-875000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.15:2380"],"advertise-client-urls":["https://192.169.0.15:2379"]}
	
	
	==> kernel <==
	 17:56:36 up 3 min,  0 users,  load average: 0.19, 0.23, 0.10
	Linux multinode-875000 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ef977d465b78] <==
	I0717 17:55:48.767860       1 main.go:326] Node multinode-875000-m03 has CIDR [10.244.3.0/24] 
	I0717 17:55:58.766400       1 main.go:299] Handling node with IPs: map[192.169.0.15:{}]
	I0717 17:55:58.766509       1 main.go:303] handling current node
	I0717 17:55:58.766527       1 main.go:299] Handling node with IPs: map[192.169.0.16:{}]
	I0717 17:55:58.766537       1 main.go:326] Node multinode-875000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:55:58.766858       1 main.go:299] Handling node with IPs: map[192.169.0.17:{}]
	I0717 17:55:58.766966       1 main.go:326] Node multinode-875000-m03 has CIDR [10.244.3.0/24] 
	I0717 17:56:08.775113       1 main.go:299] Handling node with IPs: map[192.169.0.15:{}]
	I0717 17:56:08.775133       1 main.go:303] handling current node
	I0717 17:56:08.775143       1 main.go:299] Handling node with IPs: map[192.169.0.16:{}]
	I0717 17:56:08.775146       1 main.go:326] Node multinode-875000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:56:08.775252       1 main.go:299] Handling node with IPs: map[192.169.0.17:{}]
	I0717 17:56:08.775337       1 main.go:326] Node multinode-875000-m03 has CIDR [10.244.3.0/24] 
	I0717 17:56:18.769358       1 main.go:299] Handling node with IPs: map[192.169.0.16:{}]
	I0717 17:56:18.769484       1 main.go:326] Node multinode-875000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:56:18.769744       1 main.go:299] Handling node with IPs: map[192.169.0.17:{}]
	I0717 17:56:18.769885       1 main.go:326] Node multinode-875000-m03 has CIDR [10.244.3.0/24] 
	I0717 17:56:18.770190       1 main.go:299] Handling node with IPs: map[192.169.0.15:{}]
	I0717 17:56:18.770470       1 main.go:303] handling current node
	I0717 17:56:28.774019       1 main.go:299] Handling node with IPs: map[192.169.0.15:{}]
	I0717 17:56:28.774155       1 main.go:303] handling current node
	I0717 17:56:28.774194       1 main.go:299] Handling node with IPs: map[192.169.0.16:{}]
	I0717 17:56:28.774221       1 main.go:326] Node multinode-875000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:56:28.774614       1 main.go:299] Handling node with IPs: map[192.169.0.17:{}]
	I0717 17:56:28.774932       1 main.go:326] Node multinode-875000-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [f9b27278d789] <==
	I0717 17:52:29.210376       1 main.go:299] Handling node with IPs: map[192.169.0.15:{}]
	I0717 17:52:29.210453       1 main.go:303] handling current node
	I0717 17:52:29.210472       1 main.go:299] Handling node with IPs: map[192.169.0.16:{}]
	I0717 17:52:29.210480       1 main.go:326] Node multinode-875000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:52:29.211074       1 main.go:299] Handling node with IPs: map[192.169.0.17:{}]
	I0717 17:52:29.211138       1 main.go:326] Node multinode-875000-m03 has CIDR [10.244.3.0/24] 
	I0717 17:52:29.211366       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.169.0.17 Flags: [] Table: 0} 
	I0717 17:52:39.210549       1 main.go:299] Handling node with IPs: map[192.169.0.15:{}]
	I0717 17:52:39.210621       1 main.go:303] handling current node
	I0717 17:52:39.210666       1 main.go:299] Handling node with IPs: map[192.169.0.16:{}]
	I0717 17:52:39.210680       1 main.go:326] Node multinode-875000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:52:39.211143       1 main.go:299] Handling node with IPs: map[192.169.0.17:{}]
	I0717 17:52:39.211205       1 main.go:326] Node multinode-875000-m03 has CIDR [10.244.3.0/24] 
	I0717 17:52:49.211251       1 main.go:299] Handling node with IPs: map[192.169.0.15:{}]
	I0717 17:52:49.211656       1 main.go:303] handling current node
	I0717 17:52:49.211856       1 main.go:299] Handling node with IPs: map[192.169.0.16:{}]
	I0717 17:52:49.211995       1 main.go:326] Node multinode-875000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:52:49.212618       1 main.go:299] Handling node with IPs: map[192.169.0.17:{}]
	I0717 17:52:49.212685       1 main.go:326] Node multinode-875000-m03 has CIDR [10.244.3.0/24] 
	I0717 17:52:59.218584       1 main.go:299] Handling node with IPs: map[192.169.0.16:{}]
	I0717 17:52:59.218632       1 main.go:326] Node multinode-875000-m02 has CIDR [10.244.1.0/24] 
	I0717 17:52:59.218783       1 main.go:299] Handling node with IPs: map[192.169.0.17:{}]
	I0717 17:52:59.218811       1 main.go:326] Node multinode-875000-m03 has CIDR [10.244.3.0/24] 
	I0717 17:52:59.218857       1 main.go:299] Handling node with IPs: map[192.169.0.15:{}]
	I0717 17:52:59.218883       1 main.go:303] handling current node
	
	
	==> kube-apiserver [6a219499b617] <==
	W0717 17:53:01.534298       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.534346       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.534446       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.534602       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.534653       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.534746       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.534798       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.534906       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.534963       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.535079       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.535218       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.535321       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.535403       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.535505       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.535586       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.535697       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.535721       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.535594       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.535523       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.535404       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.535334       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.535707       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.535611       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.535778       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 17:53:01.535235       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d09e2c377213] <==
	I0717 17:53:55.533045       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 17:53:55.533416       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 17:53:55.542340       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 17:53:55.547037       1 aggregator.go:165] initial CRD sync complete...
	I0717 17:53:55.547066       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 17:53:55.547072       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 17:53:55.547076       1 cache.go:39] Caches are synced for autoregister controller
	I0717 17:53:55.553091       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 17:53:55.562081       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 17:53:55.562156       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 17:53:55.565240       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0717 17:53:55.575013       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 17:53:55.577856       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 17:53:55.578053       1 policy_source.go:224] refreshing policies
	I0717 17:53:55.599511       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 17:53:55.633299       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 17:53:56.400926       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0717 17:53:56.612332       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.15]
	I0717 17:53:56.613219       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 17:53:56.615940       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 17:53:57.546608       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 17:53:57.690476       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 17:53:57.703203       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 17:53:57.746472       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 17:53:57.751876       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [25004798db78] <==
	I0717 17:54:13.156598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.186154ms"
	I0717 17:54:13.158554       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.995µs"
	I0717 17:54:13.171585       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.174µs"
	I0717 17:54:13.189750       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="11.542329ms"
	I0717 17:54:13.190076       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.002µs"
	I0717 17:54:47.829232       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-875000-m03"
	I0717 17:54:47.868462       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.551454ms"
	I0717 17:54:47.868648       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.487µs"
	I0717 17:54:57.176778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.988379ms"
	I0717 17:54:57.176914       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.72µs"
	I0717 17:54:57.187112       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.005774ms"
	I0717 17:54:57.187308       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.975µs"
	I0717 17:54:57.189569       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.2µs"
	I0717 17:55:01.230622       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-875000-m02\" does not exist"
	I0717 17:55:01.236158       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-875000-m02" podCIDRs=["10.244.1.0/24"]
	I0717 17:55:03.127043       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.976µs"
	I0717 17:55:16.461025       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-875000-m02"
	I0717 17:55:16.471273       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.739µs"
	I0717 17:55:27.190897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.71µs"
	I0717 17:55:27.194703       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.622µs"
	I0717 17:55:27.202076       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.267µs"
	I0717 17:55:27.327347       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.071µs"
	I0717 17:55:27.329082       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.776µs"
	I0717 17:55:28.351620       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.143054ms"
	I0717 17:55:28.351951       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.019µs"
	
	
	==> kube-controller-manager [f441455bef84] <==
	I0717 17:50:20.967090       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="10.55528ms"
	I0717 17:50:20.967662       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.85µs"
	I0717 17:50:23.709402       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0717 17:50:42.202206       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-875000-m02\" does not exist"
	I0717 17:50:42.215042       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-875000-m02" podCIDRs=["10.244.1.0/24"]
	I0717 17:50:43.713762       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-875000-m02"
	I0717 17:51:04.765997       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-875000-m02"
	I0717 17:51:06.946127       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.451678ms"
	I0717 17:51:06.951845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.558882ms"
	I0717 17:51:06.952385       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.96µs"
	I0717 17:51:06.954161       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.621µs"
	I0717 17:51:09.205302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.76888ms"
	I0717 17:51:09.205356       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.778µs"
	I0717 17:51:09.367704       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.965892ms"
	I0717 17:51:09.367793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.288µs"
	I0717 17:51:33.816400       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-875000-m02"
	I0717 17:51:33.817055       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-875000-m03\" does not exist"
	I0717 17:51:33.828801       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-875000-m03" podCIDRs=["10.244.2.0/24"]
	I0717 17:51:38.728818       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-875000-m03"
	I0717 17:51:57.010671       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-875000-m02"
	I0717 17:52:28.030725       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-875000-m02"
	I0717 17:52:28.927440       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-875000-m03\" does not exist"
	I0717 17:52:28.927687       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-875000-m02"
	I0717 17:52:28.934011       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-875000-m03" podCIDRs=["10.244.3.0/24"]
	I0717 17:52:47.238171       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-875000-m02"
	
	
	==> kube-proxy [34942e268752] <==
	I0717 17:53:57.647503       1 server_linux.go:69] "Using iptables proxy"
	I0717 17:53:57.669650       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.15"]
	I0717 17:53:57.763839       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 17:53:57.763895       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 17:53:57.763910       1 server_linux.go:165] "Using iptables Proxier"
	I0717 17:53:57.771880       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 17:53:57.772232       1 server.go:872] "Version info" version="v1.30.2"
	I0717 17:53:57.772292       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:53:57.774074       1 config.go:192] "Starting service config controller"
	I0717 17:53:57.774299       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 17:53:57.774356       1 config.go:101] "Starting endpoint slice config controller"
	I0717 17:53:57.774382       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 17:53:57.776426       1 config.go:319] "Starting node config controller"
	I0717 17:53:57.776451       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 17:53:57.875469       1 shared_informer.go:320] Caches are synced for service config
	I0717 17:53:57.876004       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 17:53:57.879467       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [cdb993aecac1] <==
	I0717 17:50:06.563558       1 server_linux.go:69] "Using iptables proxy"
	I0717 17:50:06.569343       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.15"]
	I0717 17:50:06.602244       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 17:50:06.602376       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 17:50:06.602435       1 server_linux.go:165] "Using iptables Proxier"
	I0717 17:50:06.604861       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 17:50:06.605087       1 server.go:872] "Version info" version="v1.30.2"
	I0717 17:50:06.605189       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:50:06.605854       1 config.go:192] "Starting service config controller"
	I0717 17:50:06.605964       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 17:50:06.606018       1 config.go:101] "Starting endpoint slice config controller"
	I0717 17:50:06.606063       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 17:50:06.606730       1 config.go:319] "Starting node config controller"
	I0717 17:50:06.607392       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 17:50:06.706374       1 shared_informer.go:320] Caches are synced for service config
	I0717 17:50:06.706497       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 17:50:06.708217       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2966fb0e7dc1] <==
	E0717 17:49:47.620587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 17:49:47.620621       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 17:49:47.620681       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 17:49:47.620738       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 17:49:47.620749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 17:49:47.620924       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 17:49:47.620956       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 17:49:47.625066       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 17:49:47.625107       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 17:49:47.625194       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 17:49:47.625226       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 17:49:48.446979       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 17:49:48.447090       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 17:49:48.460661       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 17:49:48.460709       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 17:49:48.470744       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 17:49:48.470941       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 17:49:48.499496       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 17:49:48.499854       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 17:49:48.554238       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 17:49:48.554431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 17:49:48.650651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 17:49:48.652844       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0717 17:49:49.011805       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 17:53:00.567200       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8dafcf7bb9f4] <==
	I0717 17:53:54.174484       1 serving.go:380] Generated self-signed cert in-memory
	W0717 17:53:55.456265       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 17:53:55.456354       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 17:53:55.456376       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 17:53:55.456388       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 17:53:55.539907       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0717 17:53:55.539986       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:53:55.543538       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 17:53:55.547268       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 17:53:55.547324       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 17:53:55.547409       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 17:53:55.648356       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 17:54:04 multinode-875000 kubelet[1358]: E0717 17:54:04.359494    1358 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d9e6c103-3eba-4549-b327-23c87ce480cd-config-volume podName:d9e6c103-3eba-4549-b327-23c87ce480cd nodeName:}" failed. No retries permitted until 2024-07-17 17:54:12.359483499 +0000 UTC m=+19.705712257 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d9e6c103-3eba-4549-b327-23c87ce480cd-config-volume") pod "coredns-7db6d8ff4d-nlwxm" (UID: "d9e6c103-3eba-4549-b327-23c87ce480cd") : object "kube-system"/"coredns" not registered
	Jul 17 17:54:04 multinode-875000 kubelet[1358]: E0717 17:54:04.460511    1358 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jul 17 17:54:04 multinode-875000 kubelet[1358]: E0717 17:54:04.460587    1358 projected.go:200] Error preparing data for projected volume kube-api-access-29r42 for pod default/busybox-fc5497c4f-kfksv: object "default"/"kube-root-ca.crt" not registered
	Jul 17 17:54:04 multinode-875000 kubelet[1358]: E0717 17:54:04.460646    1358 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f4a49e79-e702-46bf-977d-31fd708bc824-kube-api-access-29r42 podName:f4a49e79-e702-46bf-977d-31fd708bc824 nodeName:}" failed. No retries permitted until 2024-07-17 17:54:12.460628933 +0000 UTC m=+19.806857710 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-29r42" (UniqueName: "kubernetes.io/projected/f4a49e79-e702-46bf-977d-31fd708bc824-kube-api-access-29r42") pod "busybox-fc5497c4f-kfksv" (UID: "f4a49e79-e702-46bf-977d-31fd708bc824") : object "default"/"kube-root-ca.crt" not registered
	Jul 17 17:54:04 multinode-875000 kubelet[1358]: E0717 17:54:04.778077    1358 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-nlwxm" podUID="d9e6c103-3eba-4549-b327-23c87ce480cd"
	Jul 17 17:54:04 multinode-875000 kubelet[1358]: E0717 17:54:04.779087    1358 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-kfksv" podUID="f4a49e79-e702-46bf-977d-31fd708bc824"
	Jul 17 17:54:06 multinode-875000 kubelet[1358]: E0717 17:54:06.777578    1358 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-kfksv" podUID="f4a49e79-e702-46bf-977d-31fd708bc824"
	Jul 17 17:54:06 multinode-875000 kubelet[1358]: E0717 17:54:06.778031    1358 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-nlwxm" podUID="d9e6c103-3eba-4549-b327-23c87ce480cd"
	Jul 17 17:54:08 multinode-875000 kubelet[1358]: E0717 17:54:08.779116    1358 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-nlwxm" podUID="d9e6c103-3eba-4549-b327-23c87ce480cd"
	Jul 17 17:54:08 multinode-875000 kubelet[1358]: E0717 17:54:08.779687    1358 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-kfksv" podUID="f4a49e79-e702-46bf-977d-31fd708bc824"
	Jul 17 17:54:10 multinode-875000 kubelet[1358]: I0717 17:54:10.102644    1358 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Jul 17 17:54:28 multinode-875000 kubelet[1358]: I0717 17:54:28.265210    1358 scope.go:117] "RemoveContainer" containerID="29731a7ae130b312621fe222f93973f4dbf458027b94f93c4c037f585e755f48"
	Jul 17 17:54:28 multinode-875000 kubelet[1358]: I0717 17:54:28.265457    1358 scope.go:117] "RemoveContainer" containerID="4cff44cd1c22b50424d6cacaa33d34640d78381c67b6fc559dbb4514a9a2bf8a"
	Jul 17 17:54:28 multinode-875000 kubelet[1358]: E0717 17:54:28.265560    1358 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2bf95484-4db9-4dc1-80b0-b4a35569c9af)\"" pod="kube-system/storage-provisioner" podUID="2bf95484-4db9-4dc1-80b0-b4a35569c9af"
	Jul 17 17:54:39 multinode-875000 kubelet[1358]: I0717 17:54:39.778147    1358 scope.go:117] "RemoveContainer" containerID="4cff44cd1c22b50424d6cacaa33d34640d78381c67b6fc559dbb4514a9a2bf8a"
	Jul 17 17:54:52 multinode-875000 kubelet[1358]: E0717 17:54:52.821348    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:54:52 multinode-875000 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:54:52 multinode-875000 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:54:52 multinode-875000 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:54:52 multinode-875000 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:55:52 multinode-875000 kubelet[1358]: E0717 17:55:52.822491    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:55:52 multinode-875000 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:55:52 multinode-875000 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:55:52 multinode-875000 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:55:52 multinode-875000 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-875000 -n multinode-875000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-875000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (228.39s)

                                                
                                    

Test pass (306/338)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 13.86
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.30.2/json-events 7.17
13 TestDownloadOnly/v1.30.2/preload-exists 0
16 TestDownloadOnly/v1.30.2/kubectl 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.31
18 TestDownloadOnly/v1.30.2/DeleteAll 0.23
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.21
21 TestDownloadOnly/v1.31.0-beta.0/json-events 7.51
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.29
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.23
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.21
30 TestBinaryMirror 0.94
31 TestOffline 99.24
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.21
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
36 TestAddons/Setup 212.79
38 TestAddons/parallel/Registry 15.6
39 TestAddons/parallel/Ingress 20.23
40 TestAddons/parallel/InspektorGadget 11.53
41 TestAddons/parallel/MetricsServer 5.51
42 TestAddons/parallel/HelmTiller 9.93
44 TestAddons/parallel/CSI 42.51
45 TestAddons/parallel/Headlamp 12.96
46 TestAddons/parallel/CloudSpanner 6.41
47 TestAddons/parallel/LocalPath 58.5
48 TestAddons/parallel/NvidiaDevicePlugin 5.35
49 TestAddons/parallel/Yakd 5
50 TestAddons/parallel/Volcano 40.22
53 TestAddons/serial/GCPAuth/Namespaces 0.1
54 TestAddons/StoppedEnableDisable 5.92
55 TestCertOptions 53.09
56 TestCertExpiration 261.34
57 TestDockerFlags 39.64
58 TestForceSystemdFlag 38.05
59 TestForceSystemdEnv 156.73
62 TestHyperKitDriverInstallOrUpdate 9.01
65 TestErrorSpam/setup 37.22
66 TestErrorSpam/start 1.37
67 TestErrorSpam/status 0.5
68 TestErrorSpam/pause 1.38
69 TestErrorSpam/unpause 1.36
70 TestErrorSpam/stop 155.85
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 93.43
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 41.3
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.05
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.98
82 TestFunctional/serial/CacheCmd/cache/add_local 1.41
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
84 TestFunctional/serial/CacheCmd/cache/list 0.08
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.1
87 TestFunctional/serial/CacheCmd/cache/delete 0.16
88 TestFunctional/serial/MinikubeKubectlCmd 1.13
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.45
90 TestFunctional/serial/ExtraConfig 41.22
91 TestFunctional/serial/ComponentHealth 0.05
92 TestFunctional/serial/LogsCmd 2.63
93 TestFunctional/serial/LogsFileCmd 2.71
94 TestFunctional/serial/InvalidService 4.15
96 TestFunctional/parallel/ConfigCmd 0.51
97 TestFunctional/parallel/DashboardCmd 9.44
98 TestFunctional/parallel/DryRun 1.33
99 TestFunctional/parallel/InternationalLanguage 0.72
100 TestFunctional/parallel/StatusCmd 0.5
104 TestFunctional/parallel/ServiceCmdConnect 6.56
105 TestFunctional/parallel/AddonsCmd 0.22
106 TestFunctional/parallel/PersistentVolumeClaim 26.33
108 TestFunctional/parallel/SSHCmd 0.31
109 TestFunctional/parallel/CpCmd 1.04
110 TestFunctional/parallel/MySQL 24.92
111 TestFunctional/parallel/FileSync 0.25
112 TestFunctional/parallel/CertSync 1.02
116 TestFunctional/parallel/NodeLabels 0.05
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.14
120 TestFunctional/parallel/License 0.47
121 TestFunctional/parallel/Version/short 0.1
122 TestFunctional/parallel/Version/components 0.47
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.17
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.16
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.16
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.16
127 TestFunctional/parallel/ImageCommands/ImageBuild 2.07
128 TestFunctional/parallel/ImageCommands/Setup 1.83
129 TestFunctional/parallel/DockerEnv/bash 0.61
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.31
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.95
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.62
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.42
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.4
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.68
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.52
140 TestFunctional/parallel/ServiceCmd/DeployApp 22.11
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.37
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.13
146 TestFunctional/parallel/ServiceCmd/List 0.38
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.25
149 TestFunctional/parallel/ServiceCmd/Format 0.26
150 TestFunctional/parallel/ServiceCmd/URL 0.29
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
152 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
153 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
155 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
156 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
157 TestFunctional/parallel/ProfileCmd/profile_not_create 0.26
158 TestFunctional/parallel/ProfileCmd/profile_list 0.26
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
160 TestFunctional/parallel/MountCmd/any-port 5.85
161 TestFunctional/parallel/MountCmd/specific-port 1.77
162 TestFunctional/parallel/MountCmd/VerifyCleanup 1.95
163 TestFunctional/delete_echo-server_images 0.04
164 TestFunctional/delete_my-image_image 0.02
165 TestFunctional/delete_minikube_cached_images 0.02
169 TestMultiControlPlane/serial/StartCluster 312.9
170 TestMultiControlPlane/serial/DeployApp 5.72
171 TestMultiControlPlane/serial/PingHostFromPods 1.27
172 TestMultiControlPlane/serial/AddWorkerNode 48.99
173 TestMultiControlPlane/serial/NodeLabels 0.05
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 2.87
175 TestMultiControlPlane/serial/CopyFile 9.31
176 TestMultiControlPlane/serial/StopSecondaryNode 8.71
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.26
178 TestMultiControlPlane/serial/RestartSecondaryNode 39.57
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.33
193 TestJSONOutput/start/Command 206.64
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/pause/Command 0.48
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/unpause/Command 0.45
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 8.34
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.57
221 TestMainNoArgs 0.08
222 TestMinikubeProfile 90.51
225 TestMountStart/serial/StartWithMountFirst 21.6
229 TestMultiNode/serial/FreshStart2Nodes 109.55
230 TestMultiNode/serial/DeployApp2Nodes 4.22
231 TestMultiNode/serial/PingHostFrom2Pods 0.9
232 TestMultiNode/serial/AddNode 47.74
233 TestMultiNode/serial/MultiNodeLabels 0.05
234 TestMultiNode/serial/ProfileList 0.17
235 TestMultiNode/serial/CopyFile 5.25
236 TestMultiNode/serial/StopNode 2.82
237 TestMultiNode/serial/StartAfterStop 41.76
239 TestMultiNode/serial/DeleteNode 8.18
240 TestMultiNode/serial/StopMultiNode 16.81
241 TestMultiNode/serial/RestartMultiNode 123.68
242 TestMultiNode/serial/ValidateNameConflict 163.16
246 TestPreload 140.65
248 TestScheduledStopUnix 223.79
249 TestSkaffold 114.04
252 TestRunningBinaryUpgrade 86.93
254 TestKubernetesUpgrade 124.57
267 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.46
268 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.96
269 TestStoppedBinaryUpgrade/Setup 1.13
270 TestStoppedBinaryUpgrade/Upgrade 88.94
272 TestPause/serial/Start 58.31
273 TestPause/serial/SecondStartNoReconfiguration 41.11
274 TestStoppedBinaryUpgrade/MinikubeLogs 3.08
283 TestNoKubernetes/serial/StartNoK8sWithVersion 0.5
284 TestNoKubernetes/serial/StartWithK8s 39.81
285 TestPause/serial/Pause 0.57
286 TestPause/serial/VerifyStatus 0.16
287 TestPause/serial/Unpause 0.53
288 TestPause/serial/PauseAgain 0.6
289 TestPause/serial/DeletePaused 5.24
290 TestPause/serial/VerifyDeletedResources 0.17
291 TestNetworkPlugins/group/auto/Start 93.39
292 TestNoKubernetes/serial/StartWithStopK8s 17.57
293 TestNoKubernetes/serial/Start 20.82
294 TestNoKubernetes/serial/VerifyK8sNotRunning 0.13
295 TestNoKubernetes/serial/ProfileList 0.5
296 TestNoKubernetes/serial/Stop 2.45
297 TestNoKubernetes/serial/StartNoArgs 19.37
298 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.13
299 TestNetworkPlugins/group/calico/Start 196.25
300 TestNetworkPlugins/group/auto/KubeletFlags 0.15
301 TestNetworkPlugins/group/auto/NetCatPod 12.15
302 TestNetworkPlugins/group/auto/DNS 0.12
303 TestNetworkPlugins/group/auto/Localhost 0.1
304 TestNetworkPlugins/group/auto/HairPin 0.1
305 TestNetworkPlugins/group/custom-flannel/Start 450.47
306 TestNetworkPlugins/group/calico/ControllerPod 6.01
307 TestNetworkPlugins/group/calico/KubeletFlags 0.16
308 TestNetworkPlugins/group/calico/NetCatPod 11.13
309 TestNetworkPlugins/group/calico/DNS 0.12
310 TestNetworkPlugins/group/calico/Localhost 0.1
311 TestNetworkPlugins/group/calico/HairPin 0.1
312 TestNetworkPlugins/group/false/Start 270.81
313 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.15
314 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.13
315 TestNetworkPlugins/group/custom-flannel/DNS 0.12
316 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
317 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
318 TestNetworkPlugins/group/false/KubeletFlags 0.16
319 TestNetworkPlugins/group/false/NetCatPod 12.13
320 TestNetworkPlugins/group/false/DNS 0.12
321 TestNetworkPlugins/group/false/Localhost 0.1
322 TestNetworkPlugins/group/false/HairPin 0.1
323 TestNetworkPlugins/group/kindnet/Start 73.03
324 TestNetworkPlugins/group/flannel/Start 182.82
325 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
326 TestNetworkPlugins/group/kindnet/KubeletFlags 0.15
327 TestNetworkPlugins/group/kindnet/NetCatPod 11.13
328 TestNetworkPlugins/group/kindnet/DNS 0.13
329 TestNetworkPlugins/group/kindnet/Localhost 0.11
330 TestNetworkPlugins/group/kindnet/HairPin 0.1
331 TestNetworkPlugins/group/enable-default-cni/Start 52.4
332 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.16
333 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.13
334 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
335 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
336 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
337 TestNetworkPlugins/group/bridge/Start 170.01
338 TestNetworkPlugins/group/flannel/ControllerPod 6.01
339 TestNetworkPlugins/group/flannel/KubeletFlags 0.15
340 TestNetworkPlugins/group/flannel/NetCatPod 10.14
341 TestNetworkPlugins/group/flannel/DNS 0.12
342 TestNetworkPlugins/group/flannel/Localhost 0.11
343 TestNetworkPlugins/group/flannel/HairPin 0.1
344 TestNetworkPlugins/group/kubenet/Start 52.31
345 TestNetworkPlugins/group/kubenet/KubeletFlags 0.15
346 TestNetworkPlugins/group/kubenet/NetCatPod 11.13
347 TestNetworkPlugins/group/kubenet/DNS 33.58
348 TestNetworkPlugins/group/kubenet/Localhost 0.11
349 TestNetworkPlugins/group/kubenet/HairPin 0.11
351 TestStartStop/group/old-k8s-version/serial/FirstStart 146.59
352 TestNetworkPlugins/group/bridge/KubeletFlags 0.16
353 TestNetworkPlugins/group/bridge/NetCatPod 11.13
354 TestNetworkPlugins/group/bridge/DNS 0.13
355 TestNetworkPlugins/group/bridge/Localhost 0.1
356 TestNetworkPlugins/group/bridge/HairPin 0.1
358 TestStartStop/group/no-preload/serial/FirstStart 91.49
359 TestStartStop/group/no-preload/serial/DeployApp 8.2
360 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.75
361 TestStartStop/group/no-preload/serial/Stop 8.45
362 TestStartStop/group/old-k8s-version/serial/DeployApp 9.28
363 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.32
364 TestStartStop/group/no-preload/serial/SecondStart 289.55
365 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.7
366 TestStartStop/group/old-k8s-version/serial/Stop 8.4
367 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.32
368 TestStartStop/group/old-k8s-version/serial/SecondStart 380.11
369 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
370 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
371 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.16
372 TestStartStop/group/no-preload/serial/Pause 1.97
374 TestStartStop/group/embed-certs/serial/FirstStart 52.39
375 TestStartStop/group/embed-certs/serial/DeployApp 9.2
376 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.74
377 TestStartStop/group/embed-certs/serial/Stop 8.43
378 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
379 TestStartStop/group/embed-certs/serial/SecondStart 309.26
380 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
381 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
382 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.16
383 TestStartStop/group/old-k8s-version/serial/Pause 1.89
385 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.86
386 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.21
387 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.78
388 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.43
389 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.32
390 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 308.24
391 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.01
392 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
393 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.16
394 TestStartStop/group/embed-certs/serial/Pause 1.99
396 TestStartStop/group/newest-cni/serial/FirstStart 41.66
397 TestStartStop/group/newest-cni/serial/DeployApp 0
398 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.74
399 TestStartStop/group/newest-cni/serial/Stop 8.43
400 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.39
401 TestStartStop/group/newest-cni/serial/SecondStart 29.96
402 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 11.01
403 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
404 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
405 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.17
406 TestStartStop/group/newest-cni/serial/Pause 1.9
407 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
408 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.16
409 TestStartStop/group/default-k8s-diff-port/serial/Pause 1.96
x
+
TestDownloadOnly/v1.20.0/json-events (13.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-192000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-192000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (13.861278186s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (13.86s)
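Note: the (dbg) Run/Done lines above are the test harness shelling out to the freshly built minikube binary. As a rough sketch only (not minikube's actual test helper), the same invocation could be driven from Go roughly as follows, assuming the binary sits at out/minikube-darwin-amd64 relative to the working directory and the hyperkit driver is installed:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same flags as the test invocation logged above.
	cmd := exec.Command("out/minikube-darwin-amd64",
		"start", "-o=json", "--download-only", "-p", "download-only-192000",
		"--force", "--alsologtostderr",
		"--kubernetes-version=v1.20.0",
		"--container-runtime=docker",
		"--driver=hyperkit")
	cmd.Stdout = os.Stdout // JSON progress events
	cmd.Stderr = os.Stderr // the I/W/E log lines quoted in this report
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "start --download-only failed:", err)
		os.Exit(1)
	}
}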

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-192000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-192000: exit status 85 (295.858446ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-192000 | jenkins | v1.33.1 | 17 Jul 24 10:10 PDT |          |
	|         | -p download-only-192000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 10:10:42
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 10:10:42.699048    1641 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:10:42.699251    1641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:10:42.699256    1641 out.go:304] Setting ErrFile to fd 2...
	I0717 10:10:42.699259    1641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:10:42.699453    1641 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
	W0717 10:10:42.699563    1641 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19283-1099/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19283-1099/.minikube/config/config.json: no such file or directory
	I0717 10:10:42.701399    1641 out.go:298] Setting JSON to true
	I0717 10:10:42.724939    1641 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":613,"bootTime":1721235629,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0717 10:10:42.725035    1641 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:10:42.748745    1641 out.go:97] [download-only-192000] minikube v1.33.1 on Darwin 14.5
	I0717 10:10:42.748951    1641 notify.go:220] Checking for updates...
	W0717 10:10:42.748943    1641 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 10:10:42.769444    1641 out.go:169] MINIKUBE_LOCATION=19283
	I0717 10:10:42.790682    1641 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:10:42.819162    1641 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 10:10:42.839426    1641 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:10:42.860233    1641 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
	W0717 10:10:42.902347    1641 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 10:10:42.902810    1641 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:10:42.952040    1641 out.go:97] Using the hyperkit driver based on user configuration
	I0717 10:10:42.952098    1641 start.go:297] selected driver: hyperkit
	I0717 10:10:42.952111    1641 start.go:901] validating driver "hyperkit" against <nil>
	I0717 10:10:42.952341    1641 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:10:42.952699    1641 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19283-1099/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0717 10:10:43.363424    1641 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0717 10:10:43.368337    1641 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:10:43.368357    1641 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0717 10:10:43.368386    1641 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 10:10:43.372616    1641 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0717 10:10:43.373170    1641 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 10:10:43.373196    1641 cni.go:84] Creating CNI manager for ""
	I0717 10:10:43.373213    1641 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0717 10:10:43.373285    1641 start.go:340] cluster config:
	{Name:download-only-192000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-192000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:10:43.373508    1641 iso.go:125] acquiring lock: {Name:mkf51f842bcc8a77e9c7c50d642c4c76848e96af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:10:43.394353    1641 out.go:97] Downloading VM boot image ...
	I0717 10:10:43.394430    1641 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 10:10:47.914649    1641 out.go:97] Starting "download-only-192000" primary control-plane node in "download-only-192000" cluster
	I0717 10:10:47.914679    1641 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0717 10:10:47.970777    1641 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0717 10:10:47.970811    1641 cache.go:56] Caching tarball of preloaded images
	I0717 10:10:47.971195    1641 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0717 10:10:47.992765    1641 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0717 10:10:47.992810    1641 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0717 10:10:48.069101    1641 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-192000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-192000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-192000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/json-events (7.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-036000 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-036000 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=hyperkit : (7.168261604s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (7.17s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/kubectl
--- PASS: TestDownloadOnly/v1.30.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/LogsDuration (0.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-036000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-036000: exit status 85 (309.974329ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-192000 | jenkins | v1.33.1 | 17 Jul 24 10:10 PDT |                     |
	|         | -p download-only-192000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:10 PDT | 17 Jul 24 10:10 PDT |
	| delete  | -p download-only-192000        | download-only-192000 | jenkins | v1.33.1 | 17 Jul 24 10:10 PDT | 17 Jul 24 10:10 PDT |
	| start   | -o=json --download-only        | download-only-036000 | jenkins | v1.33.1 | 17 Jul 24 10:10 PDT |                     |
	|         | -p download-only-036000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 10:10:57
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 10:10:57.300434    1666 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:10:57.301045    1666 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:10:57.301054    1666 out.go:304] Setting ErrFile to fd 2...
	I0717 10:10:57.301061    1666 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:10:57.301551    1666 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
	I0717 10:10:57.303027    1666 out.go:298] Setting JSON to true
	I0717 10:10:57.328544    1666 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":628,"bootTime":1721235629,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0717 10:10:57.328636    1666 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:10:57.349571    1666 out.go:97] [download-only-036000] minikube v1.33.1 on Darwin 14.5
	I0717 10:10:57.349788    1666 notify.go:220] Checking for updates...
	I0717 10:10:57.371589    1666 out.go:169] MINIKUBE_LOCATION=19283
	I0717 10:10:57.393608    1666 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:10:57.414714    1666 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 10:10:57.435723    1666 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:10:57.456612    1666 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
	W0717 10:10:57.498665    1666 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 10:10:57.499065    1666 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:10:57.529445    1666 out.go:97] Using the hyperkit driver based on user configuration
	I0717 10:10:57.529521    1666 start.go:297] selected driver: hyperkit
	I0717 10:10:57.529545    1666 start.go:901] validating driver "hyperkit" against <nil>
	I0717 10:10:57.529772    1666 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:10:57.529984    1666 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19283-1099/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0717 10:10:57.540580    1666 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0717 10:10:57.545388    1666 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:10:57.545409    1666 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0717 10:10:57.545433    1666 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 10:10:57.548291    1666 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0717 10:10:57.548440    1666 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 10:10:57.548489    1666 cni.go:84] Creating CNI manager for ""
	I0717 10:10:57.548506    1666 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 10:10:57.548516    1666 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 10:10:57.548577    1666 start.go:340] cluster config:
	{Name:download-only-036000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-036000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:10:57.548670    1666 iso.go:125] acquiring lock: {Name:mkf51f842bcc8a77e9c7c50d642c4c76848e96af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:10:57.569709    1666 out.go:97] Starting "download-only-036000" primary control-plane node in "download-only-036000" cluster
	I0717 10:10:57.569743    1666 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:10:57.624953    1666 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0717 10:10:57.625041    1666 cache.go:56] Caching tarball of preloaded images
	I0717 10:10:57.625516    1666 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:10:57.647454    1666 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0717 10:10:57.647471    1666 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 ...
	I0717 10:10:57.720486    1666 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4?checksum=md5:f94875995e68df9a8856f3277eea0126 -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0717 10:11:02.061223    1666 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 ...
	I0717 10:11:02.061404    1666 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-036000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-036000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.31s)
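Aside: the preload.go lines quoted above ("getting checksum ...", "saving checksum ...", "verifying checksum ...") describe fetching the v1.30.2 preload tarball with an md5 checksum (f94875995e68df9a8856f3277eea0126 in the download URL) and verifying the downloaded file against it. A minimal stand-alone sketch of that verification step, assuming the tarball is cached under $HOME/.minikube rather than the Jenkins workspace path shown in the log:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func main() {
	// Expected md5 is taken from the checksum= parameter in the log above;
	// the cache path here is an assumption for illustration only.
	path := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4")
	want := "f94875995e68df9a8856f3277eea0126"

	f, err := os.Open(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		fmt.Printf("checksum mismatch: got %s, want %s\n", got, want)
		os.Exit(1)
	}
	fmt.Println("preload tarball checksum OK")
}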

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-036000
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/json-events (7.51s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-587000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-587000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=hyperkit : (7.51050263s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (7.51s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-587000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-587000: exit status 85 (290.621064ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-192000 | jenkins | v1.33.1 | 17 Jul 24 10:10 PDT |                     |
	|         | -p download-only-192000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=hyperkit                   |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:10 PDT | 17 Jul 24 10:10 PDT |
	| delete  | -p download-only-192000             | download-only-192000 | jenkins | v1.33.1 | 17 Jul 24 10:10 PDT | 17 Jul 24 10:10 PDT |
	| start   | -o=json --download-only             | download-only-036000 | jenkins | v1.33.1 | 17 Jul 24 10:10 PDT |                     |
	|         | -p download-only-036000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=hyperkit                   |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:11 PDT | 17 Jul 24 10:11 PDT |
	| delete  | -p download-only-036000             | download-only-036000 | jenkins | v1.33.1 | 17 Jul 24 10:11 PDT | 17 Jul 24 10:11 PDT |
	| start   | -o=json --download-only             | download-only-587000 | jenkins | v1.33.1 | 17 Jul 24 10:11 PDT |                     |
	|         | -p download-only-587000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=hyperkit                   |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 10:11:05
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 10:11:05.220427    1692 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:11:05.220605    1692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:11:05.220611    1692 out.go:304] Setting ErrFile to fd 2...
	I0717 10:11:05.220615    1692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:11:05.220790    1692 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
	I0717 10:11:05.222269    1692 out.go:298] Setting JSON to true
	I0717 10:11:05.245315    1692 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":636,"bootTime":1721235629,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0717 10:11:05.245403    1692 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:11:05.266562    1692 out.go:97] [download-only-587000] minikube v1.33.1 on Darwin 14.5
	I0717 10:11:05.266783    1692 notify.go:220] Checking for updates...
	I0717 10:11:05.288156    1692 out.go:169] MINIKUBE_LOCATION=19283
	I0717 10:11:05.309316    1692 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:11:05.330439    1692 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 10:11:05.351723    1692 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:11:05.374537    1692 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
	W0717 10:11:05.416398    1692 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 10:11:05.416899    1692 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:11:05.446244    1692 out.go:97] Using the hyperkit driver based on user configuration
	I0717 10:11:05.446298    1692 start.go:297] selected driver: hyperkit
	I0717 10:11:05.446310    1692 start.go:901] validating driver "hyperkit" against <nil>
	I0717 10:11:05.446525    1692 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:11:05.446745    1692 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19283-1099/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0717 10:11:05.456854    1692 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0717 10:11:05.460711    1692 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:11:05.460736    1692 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0717 10:11:05.460765    1692 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 10:11:05.463449    1692 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0717 10:11:05.463597    1692 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 10:11:05.463622    1692 cni.go:84] Creating CNI manager for ""
	I0717 10:11:05.463638    1692 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 10:11:05.463647    1692 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 10:11:05.463728    1692 start.go:340] cluster config:
	{Name:download-only-587000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:11:05.463815    1692 iso.go:125] acquiring lock: {Name:mkf51f842bcc8a77e9c7c50d642c4c76848e96af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:11:05.485353    1692 out.go:97] Starting "download-only-587000" primary control-plane node in "download-only-587000" cluster
	I0717 10:11:05.485397    1692 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0717 10:11:05.542284    1692 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0717 10:11:05.542353    1692 cache.go:56] Caching tarball of preloaded images
	I0717 10:11:05.542727    1692 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0717 10:11:05.564576    1692 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0717 10:11:05.564603    1692 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0717 10:11:05.638293    1692 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:181d3c061f7abe363e688bf9ac3c9580 -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0717 10:11:09.936141    1692 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0717 10:11:09.936336    1692 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-587000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-587000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-587000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.21s)

                                                
                                    
x
+
TestBinaryMirror (0.94s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-630000 --alsologtostderr --binary-mirror http://127.0.0.1:49542 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-630000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-630000
--- PASS: TestBinaryMirror (0.94s)
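Aside: TestBinaryMirror starts minikube with --binary-mirror http://127.0.0.1:49542, i.e. Kubernetes binaries (kubectl, kubelet, kubeadm) are fetched from a local HTTP endpoint instead of the default upstream location. A rough illustration of such a mirror is just a static file server; the ./mirror directory below is an assumption for this sketch, while the port matches the one in the command above:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve pre-staged release binaries from ./mirror so a subsequent
	// "minikube start --binary-mirror http://127.0.0.1:49542 ..." can
	// download them from this process.
	log.Fatal(http.ListenAndServe("127.0.0.1:49542", http.FileServer(http.Dir("./mirror"))))
}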

                                                
                                    
x
+
TestOffline (99.24s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-382000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-382000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : (1m33.955418085s)
helpers_test.go:175: Cleaning up "offline-docker-382000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-382000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-382000: (5.282044473s)
--- PASS: TestOffline (99.24s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-401000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-401000: exit status 85 (210.087259ms)

                                                
                                                
-- stdout --
	* Profile "addons-401000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-401000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-401000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-401000: exit status 85 (188.354713ms)

                                                
                                                
-- stdout --
	* Profile "addons-401000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-401000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

                                                
                                    
x
+
TestAddons/Setup (212.79s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-401000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-401000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m32.786453924s)
--- PASS: TestAddons/Setup (212.79s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.6s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 7.943889ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-4kcws" [fc618d5d-ba8a-4648-bcc5-480d9e8f8440] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002385876s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-726x5" [cd9813b6-c531-4724-8745-2477d75f4a4b] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002937928s
addons_test.go:342: (dbg) Run:  kubectl --context addons-401000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-401000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-401000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.922554032s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p addons-401000 ip
2024/07/17 10:15:03 [DEBUG] GET http://192.169.0.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 -p addons-401000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.60s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (20.23s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-401000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-401000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-401000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [81c50fb2-677e-4c80-887f-f01505a5d484] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [81c50fb2-677e-4c80-887f-f01505a5d484] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.002204899s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-401000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-401000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 -p addons-401000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.169.0.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p addons-401000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 -p addons-401000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-amd64 -p addons-401000 addons disable ingress --alsologtostderr -v=1: (7.463659439s)
--- PASS: TestAddons/parallel/Ingress (20.23s)

TestAddons/parallel/InspektorGadget (11.53s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6sfq5" [20620aa9-de3a-4fa4-a366-99798eea7745] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.002150989s
addons_test.go:843: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-401000
addons_test.go:843: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-401000: (5.527581581s)
--- PASS: TestAddons/parallel/InspektorGadget (11.53s)

TestAddons/parallel/MetricsServer (5.51s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.73538ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-sh29s" [2f8784b7-232d-44c4-a9af-0604b7bc55d2] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008969941s
addons_test.go:417: (dbg) Run:  kubectl --context addons-401000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-401000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.51s)

TestAddons/parallel/HelmTiller (9.93s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.889714ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-mgmxg" [df8edc95-ba3c-4aaa-bd44-768960992333] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.002895906s
addons_test.go:475: (dbg) Run:  kubectl --context addons-401000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-401000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.493663173s)
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-401000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.93s)

TestAddons/parallel/CSI (42.51s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 4.292219ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-401000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-401000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c013bb6f-32bd-433b-9f41-3f1c40d56617] Pending
helpers_test.go:344: "task-pv-pod" [c013bb6f-32bd-433b-9f41-3f1c40d56617] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c013bb6f-32bd-433b-9f41-3f1c40d56617] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.002556881s
addons_test.go:586: (dbg) Run:  kubectl --context addons-401000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-401000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-401000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-401000 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-401000 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-401000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-401000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [70b4faf2-be24-40f5-bff0-50d1428a4ad7] Pending
helpers_test.go:344: "task-pv-pod-restore" [70b4faf2-be24-40f5-bff0-50d1428a4ad7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [70b4faf2-be24-40f5-bff0-50d1428a4ad7] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003244696s
addons_test.go:628: (dbg) Run:  kubectl --context addons-401000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-401000 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-401000 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-darwin-amd64 -p addons-401000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-darwin-amd64 -p addons-401000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.502665685s)
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-401000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (42.51s)

TestAddons/parallel/Headlamp (12.96s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-401000 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-pggj2" [f8d24bf8-54ab-4db2-9a10-7328721a41ac] Pending
helpers_test.go:344: "headlamp-7867546754-pggj2" [f8d24bf8-54ab-4db2-9a10-7328721a41ac] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-pggj2" [f8d24bf8-54ab-4db2-9a10-7328721a41ac] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.002395945s
--- PASS: TestAddons/parallel/Headlamp (12.96s)

TestAddons/parallel/CloudSpanner (6.41s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-gn92h" [5eee7dca-3bef-4855-a42b-c397281a4181] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003482543s
addons_test.go:862: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-401000
--- PASS: TestAddons/parallel/CloudSpanner (6.41s)

TestAddons/parallel/LocalPath (58.5s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-401000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-401000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [57d21efb-cc19-4e0e-b5b5-f7c3ca13801f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [57d21efb-cc19-4e0e-b5b5-f7c3ca13801f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [57d21efb-cc19-4e0e-b5b5-f7c3ca13801f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.002128473s
addons_test.go:992: (dbg) Run:  kubectl --context addons-401000 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-darwin-amd64 -p addons-401000 ssh "cat /opt/local-path-provisioner/pvc-db7875b3-08b5-4104-aee5-f4156014a8c9_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-401000 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-401000 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-darwin-amd64 -p addons-401000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-darwin-amd64 -p addons-401000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.773395604s)
--- PASS: TestAddons/parallel/LocalPath (58.50s)

TestAddons/parallel/NvidiaDevicePlugin (5.35s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4x586" [7e52f2f3-e210-46ad-a50a-baf9bf203db2] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.002909161s
addons_test.go:1056: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-401000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.35s)

TestAddons/parallel/Yakd (5s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-wkmkr" [2e29b074-f57e-43e0-9e38-e082712655bf] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.002320176s
--- PASS: TestAddons/parallel/Yakd (5.00s)

TestAddons/parallel/Volcano (40.22s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano
=== CONT  TestAddons/parallel/Volcano
addons_test.go:889: volcano-scheduler stabilized in 1.847596ms
addons_test.go:905: volcano-controller stabilized in 1.908581ms
addons_test.go:897: volcano-admission stabilized in 2.248751ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-z7d5d" [9d510c9e-ca99-4053-9a44-05be9aac4ac8] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.002608012s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-dgm5b" [f91cc77a-9da5-4ea3-8b57-763cc7013625] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.002079156s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-g4khz" [bcbf3712-752c-4828-849b-51b1a4402980] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.002710233s
addons_test.go:924: (dbg) Run:  kubectl --context addons-401000 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-401000 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-401000 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [440885d3-25fd-4502-b4c1-9d3f2f6cc327] Pending
helpers_test.go:344: "test-job-nginx-0" [440885d3-25fd-4502-b4c1-9d3f2f6cc327] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [440885d3-25fd-4502-b4c1-9d3f2f6cc327] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 15.002530278s
addons_test.go:960: (dbg) Run:  out/minikube-darwin-amd64 -p addons-401000 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-darwin-amd64 -p addons-401000 addons disable volcano --alsologtostderr -v=1: (9.918843793s)
--- PASS: TestAddons/parallel/Volcano (40.22s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-401000 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-401000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (5.92s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-401000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-401000: (5.378103683s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-401000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-401000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-401000
--- PASS: TestAddons/StoppedEnableDisable (5.92s)

TestCertOptions (53.09s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-881000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-881000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : (47.493477007s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-881000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-881000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-881000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-881000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-881000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-881000: (5.256630772s)
--- PASS: TestCertOptions (53.09s)

TestCertExpiration (261.34s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-637000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-637000 --memory=2048 --cert-expiration=3m --driver=hyperkit : (35.191776569s)
E0717 11:13:50.159043    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-637000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-637000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (40.877689484s)
helpers_test.go:175: Cleaning up "cert-expiration-637000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-637000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-637000: (5.269107828s)
--- PASS: TestCertExpiration (261.34s)

TestDockerFlags (39.64s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-933000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-933000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (35.899615394s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-933000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-933000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-933000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-933000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-933000: (3.425487061s)
--- PASS: TestDockerFlags (39.64s)

TestForceSystemdFlag (38.05s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-093000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-093000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (34.542526489s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-093000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-093000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-093000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-093000: (3.353413476s)
--- PASS: TestForceSystemdFlag (38.05s)

TestForceSystemdEnv (156.73s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-705000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-705000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : (2m31.320743811s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-705000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-705000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-705000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-705000: (5.240307151s)
--- PASS: TestForceSystemdEnv (156.73s)

TestHyperKitDriverInstallOrUpdate (9.01s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.01s)

TestErrorSpam/setup (37.22s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-636000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-636000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-636000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-636000 --driver=hyperkit : (37.223300449s)
--- PASS: TestErrorSpam/setup (37.22s)

TestErrorSpam/start (1.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-636000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-636000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-636000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-636000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-636000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-636000 start --dry-run
--- PASS: TestErrorSpam/start (1.37s)

TestErrorSpam/status (0.5s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-636000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-636000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-636000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-636000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-636000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-636000 status
--- PASS: TestErrorSpam/status (0.50s)

TestErrorSpam/pause (1.38s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-636000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-636000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-636000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-636000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-636000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-636000 pause
--- PASS: TestErrorSpam/pause (1.38s)

TestErrorSpam/unpause (1.36s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-636000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-636000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-636000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-636000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-636000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-636000 unpause
--- PASS: TestErrorSpam/unpause (1.36s)

TestErrorSpam/stop (155.85s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-636000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-636000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-636000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-636000 stop: (5.393754775s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-636000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-636000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-636000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-636000 stop: (1m15.234701735s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-636000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-636000 stop
E0717 10:19:48.071383    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 10:19:48.079748    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 10:19:48.090359    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 10:19:48.110532    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 10:19:48.151605    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 10:19:48.231778    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 10:19:48.392205    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 10:19:48.712514    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 10:19:49.352764    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 10:19:50.632890    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 10:19:53.193058    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 10:19:58.313318    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 10:20:08.553627    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 10:20:29.036286    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-636000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-636000 stop: (1m15.217034485s)
--- PASS: TestErrorSpam/stop (155.85s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/test/nested/copy/1639/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (93.43s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-325000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E0717 10:21:09.997807    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-325000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (1m33.429430377s)
--- PASS: TestFunctional/serial/StartWithProxy (93.43s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (41.3s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-325000 --alsologtostderr -v=8
E0717 10:22:31.919608    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-325000 --alsologtostderr -v=8: (41.30255865s)
functional_test.go:659: soft start took 41.303021991s for "functional-325000" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.30s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-325000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.98s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-325000 cache add registry.k8s.io/pause:3.1: (1.135540924s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.98s)

TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-325000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local2172936789/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 cache add minikube-local-cache-test:functional-325000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 cache delete minikube-local-cache-test:functional-325000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-325000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-325000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (149.374052ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.10s)

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (1.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 kubectl -- --context functional-325000 get pods
functional_test.go:712: (dbg) Done: out/minikube-darwin-amd64 -p functional-325000 kubectl -- --context functional-325000 get pods: (1.133705029s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.45s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-325000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-325000 get pods: (1.44447006s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.45s)

TestFunctional/serial/ExtraConfig (41.22s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-325000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-325000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.21464863s)
functional_test.go:757: restart took 41.214813952s for "functional-325000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.22s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-325000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (2.63s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-325000 logs: (2.634067472s)
--- PASS: TestFunctional/serial/LogsCmd (2.63s)

TestFunctional/serial/LogsFileCmd (2.71s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd1048496044/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-325000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd1048496044/001/logs.txt: (2.708185332s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.71s)

TestFunctional/serial/InvalidService (4.15s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-325000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-325000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-325000: exit status 115 (267.821057ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.4:30303 |
	|-----------|-------------|-------------|--------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-325000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.15s)

TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-325000 config get cpus: exit status 14 (74.888169ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-325000 config get cpus: exit status 14 (55.602838ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
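For reference, the set/unset round-trip exercised above can be reproduced outside the test harness. A minimal sketch, assuming the same out/minikube-darwin-amd64 binary and the functional-325000 profile, relying on the behaviour visible in the log that "config get" exits non-zero (status 14) when the key is unset:

package main

import (
	"fmt"
	"os/exec"
)

// mk runs the minikube binary against the functional-325000 profile.
func mk(args ...string) (string, error) {
	full := append([]string{"-p", "functional-325000"}, args...)
	out, err := exec.Command("out/minikube-darwin-amd64", full...).CombinedOutput()
	return string(out), err
}

func main() {
	// "config get" on an unset key fails, as seen in the log above.
	if _, err := mk("config", "get", "cpus"); err != nil {
		fmt.Println("cpus is unset:", err)
	}
	mk("config", "set", "cpus", "2")      // store a value
	val, _ := mk("config", "get", "cpus") // read it back
	fmt.Print("cpus = ", val)
	mk("config", "unset", "cpus") // remove it again
	if _, err := mk("config", "get", "cpus"); err != nil {
		fmt.Println("cpus is unset again:", err)
	}
}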

TestFunctional/parallel/DashboardCmd (9.44s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-325000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-325000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2868: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.44s)

TestFunctional/parallel/DryRun (1.33s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-325000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-325000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (667.732767ms)

-- stdout --
	* [functional-325000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0717 10:24:44.536536    2825 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:24:44.536823    2825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:24:44.536829    2825 out.go:304] Setting ErrFile to fd 2...
	I0717 10:24:44.536832    2825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:24:44.537056    2825 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
	I0717 10:24:44.538686    2825 out.go:298] Setting JSON to false
	I0717 10:24:44.562445    2825 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1455,"bootTime":1721235629,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0717 10:24:44.562543    2825 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:24:44.584994    2825 out.go:177] * [functional-325000] minikube v1.33.1 on Darwin 14.5
	I0717 10:24:44.626684    2825 notify.go:220] Checking for updates...
	I0717 10:24:44.647663    2825 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 10:24:44.669648    2825 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:24:44.711700    2825 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 10:24:44.753689    2825 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:24:44.795506    2825 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
	I0717 10:24:44.858563    2825 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:24:44.880379    2825 config.go:182] Loaded profile config "functional-325000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:24:44.881176    2825 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:24:44.881231    2825 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:24:44.890765    2825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50773
	I0717 10:24:44.891132    2825 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:24:44.891554    2825 main.go:141] libmachine: Using API Version  1
	I0717 10:24:44.891563    2825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:24:44.891826    2825 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:24:44.891949    2825 main.go:141] libmachine: (functional-325000) Calling .DriverName
	I0717 10:24:44.892157    2825 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:24:44.892409    2825 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:24:44.892434    2825 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:24:44.900883    2825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50775
	I0717 10:24:44.901239    2825 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:24:44.901598    2825 main.go:141] libmachine: Using API Version  1
	I0717 10:24:44.901615    2825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:24:44.901843    2825 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:24:44.901958    2825 main.go:141] libmachine: (functional-325000) Calling .DriverName
	I0717 10:24:44.963425    2825 out.go:177] * Using the hyperkit driver based on existing profile
	I0717 10:24:44.984565    2825 start.go:297] selected driver: hyperkit
	I0717 10:24:44.984591    2825 start.go:901] validating driver "hyperkit" against &{Name:functional-325000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.2 ClusterName:functional-325000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:24:44.984805    2825 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:24:45.026418    2825 out.go:177] 
	W0717 10:24:45.063665    2825 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0717 10:24:45.126533    2825 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-325000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.33s)
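The non-zero exit above comes from minikube's pre-flight validation rather than from an actual start: even with --dry-run it rejects a memory request below the usable minimum (1800MB). A small sketch of the same check, assuming the binary and profile from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64",
		"start", "-p", "functional-325000", "--dry-run",
		"--memory", "250MB", "--driver=hyperkit")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		// RSRC_INSUFFICIENT_REQ_MEMORY surfaces as a non-zero exit code (23 in the log).
		fmt.Println("dry-run rejected the 250MB request, exit code:", exitErr.ExitCode())
	}
}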

TestFunctional/parallel/InternationalLanguage (0.72s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-325000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-325000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (724.469498ms)

-- stdout --
	* [functional-325000] minikube v1.33.1 sur Darwin 14.5
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0717 10:24:44.626696    2829 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:24:44.627264    2829 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:24:44.627272    2829 out.go:304] Setting ErrFile to fd 2...
	I0717 10:24:44.627276    2829 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:24:44.627504    2829 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
	I0717 10:24:44.648151    2829 out.go:298] Setting JSON to false
	I0717 10:24:44.671925    2829 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1455,"bootTime":1721235629,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0717 10:24:44.672032    2829 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:24:44.711695    2829 out.go:177] * [functional-325000] minikube v1.33.1 sur Darwin 14.5
	I0717 10:24:44.753698    2829 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 10:24:44.753818    2829 notify.go:220] Checking for updates...
	I0717 10:24:44.816653    2829 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
	I0717 10:24:44.879594    2829 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 10:24:44.900472    2829 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:24:44.963409    2829 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
	I0717 10:24:44.984620    2829 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:24:45.006487    2829 config.go:182] Loaded profile config "functional-325000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:24:45.007193    2829 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:24:45.007270    2829 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:24:45.016935    2829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50777
	I0717 10:24:45.017320    2829 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:24:45.017750    2829 main.go:141] libmachine: Using API Version  1
	I0717 10:24:45.017761    2829 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:24:45.018041    2829 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:24:45.018151    2829 main.go:141] libmachine: (functional-325000) Calling .DriverName
	I0717 10:24:45.018338    2829 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:24:45.018586    2829 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:24:45.018614    2829 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:24:45.027065    2829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50779
	I0717 10:24:45.027408    2829 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:24:45.027755    2829 main.go:141] libmachine: Using API Version  1
	I0717 10:24:45.027776    2829 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:24:45.027987    2829 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:24:45.028104    2829 main.go:141] libmachine: (functional-325000) Calling .DriverName
	I0717 10:24:45.126537    2829 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0717 10:24:45.184541    2829 start.go:297] selected driver: hyperkit
	I0717 10:24:45.184556    2829 start.go:901] validating driver "hyperkit" against &{Name:functional-325000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.2 ClusterName:functional-325000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:24:45.184682    2829 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:24:45.208728    2829 out.go:177] 
	W0717 10:24:45.231811    2829 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0717 10:24:45.252761    2829 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.72s)

TestFunctional/parallel/StatusCmd (0.5s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.50s)
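The -f flag used above takes a Go template over the status struct, which is convenient for scripting a single field. A sketch, assuming the same binary and profile (the template fields {{.Host}}, {{.Kubelet}}, {{.APIServer}} and {{.Kubeconfig}} are the ones exercised by the test; the labels in front of them are free-form text):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	format := "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-325000",
		"status", "-f", format).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// "status" exits non-zero when any component is not in the expected state.
		fmt.Println("status reported a problem:", err)
	}
}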

TestFunctional/parallel/ServiceCmdConnect (6.56s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-325000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-325000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-5stcl" [bbbd01db-3ca8-4acf-8976-a146b4f7ff58] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-5stcl" [bbbd01db-3ca8-4acf-8976-a146b4f7ff58] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.00426089s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.169.0.4:30370
functional_test.go:1671: http://192.169.0.4:30370: success! body:

Hostname: hello-node-connect-57b4589c47-5stcl

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.4:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.4:30370
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.56s)
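What the test above boils down to: ask minikube for the NodePort URL of the service, then make one HTTP request against it and check the echoserver answers. A sketch of that flow, assuming the hello-node-connect service from the log already exists in the functional-325000 cluster:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the reachable URL of the NodePort service.
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-325000",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))

	// Hit the echoserver behind it; a 200 response proves the node port routes to the pod.
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}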

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (26.33s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [33078389-1467-410a-96d8-2796fdf26759] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00592917s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-325000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-325000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-325000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-325000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ef6b4e0f-77d3-47a3-9ed4-eab03112b1cf] Pending
helpers_test.go:344: "sp-pod" [ef6b4e0f-77d3-47a3-9ed4-eab03112b1cf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ef6b4e0f-77d3-47a3-9ed4-eab03112b1cf] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004087211s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-325000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-325000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-325000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [900ca5c9-1c47-4d30-a448-d82594593109] Pending
helpers_test.go:344: "sp-pod" [900ca5c9-1c47-4d30-a448-d82594593109] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [900ca5c9-1c47-4d30-a448-d82594593109] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003879056s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-325000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.33s)
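The sequence above is a persistence check: write a marker file onto the PVC-backed mount, delete the pod, recreate it, and confirm the file is still there because the claim outlives the pod. A sketch of the same flow, assuming kubectl on PATH, the functional-325000 context, a running sp-pod, and the pvc/pod manifests from the test's testdata directory being available locally:

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs kubectl against the functional-325000 context.
func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"--context", "functional-325000"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	// Write a marker file onto the PVC-backed mount inside the running pod.
	if out, err := kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo"); err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	// Delete and recreate the pod; the claim (and its data) should survive.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s")
	// The marker file written before the delete should still be listed.
	out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}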

TestFunctional/parallel/SSHCmd (0.31s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.31s)

TestFunctional/parallel/CpCmd (1.04s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh -n functional-325000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 cp functional-325000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd2983181071/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh -n functional-325000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh -n functional-325000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.04s)
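The cp/ssh pairing above is the whole verification: copy a host file into the VM, then read it back over ssh to confirm the contents survived the transfer. A sketch, assuming the same binary and profile and a local testdata/cp-test.txt file:

package main

import (
	"fmt"
	"os/exec"
)

// mk runs the minikube binary against the functional-325000 profile.
func mk(args ...string) ([]byte, error) {
	full := append([]string{"-p", "functional-325000"}, args...)
	return exec.Command("out/minikube-darwin-amd64", full...).CombinedOutput()
}

func main() {
	// Copy a host file into the VM...
	if out, err := mk("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	// ...then read it back over ssh (defaults to the primary node).
	out, err := mk("ssh", "sudo cat /home/docker/cp-test.txt")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}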

TestFunctional/parallel/MySQL (24.92s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-325000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-f2mhh" [9bf80c8b-dd08-4d7d-9df2-1c695026bf95] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-f2mhh" [9bf80c8b-dd08-4d7d-9df2-1c695026bf95] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.006361692s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-325000 exec mysql-64454c8b5c-f2mhh -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-325000 exec mysql-64454c8b5c-f2mhh -- mysql -ppassword -e "show databases;": exit status 1 (157.596429ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-325000 exec mysql-64454c8b5c-f2mhh -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-325000 exec mysql-64454c8b5c-f2mhh -- mysql -ppassword -e "show databases;": exit status 1 (155.132455ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-325000 exec mysql-64454c8b5c-f2mhh -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-325000 exec mysql-64454c8b5c-f2mhh -- mysql -ppassword -e "show databases;": exit status 1 (133.334349ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-325000 exec mysql-64454c8b5c-f2mhh -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.92s)
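The failed attempts above ("Access denied", "Can't connect to local MySQL server") are expected while mysqld is still initialising inside the freshly created pod; the test simply retries until the query succeeds. A sketch of that retry loop, assuming kubectl, the functional-325000 context, the mysql deployment from the test's testdata, and the pod name taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 10; i++ {
		// Re-run the query until mysqld inside the pod is ready to serve it.
		out, err := exec.Command("kubectl", "--context", "functional-325000",
			"exec", "mysql-64454c8b5c-f2mhh", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		fmt.Printf("attempt %d failed (%v), retrying...\n", i+1, err)
		time.Sleep(3 * time.Second)
	}
}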

TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1639/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh "sudo cat /etc/test/nested/copy/1639/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.02s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1639.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh "sudo cat /etc/ssl/certs/1639.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1639.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh "sudo cat /usr/share/ca-certificates/1639.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/16392.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh "sudo cat /etc/ssl/certs/16392.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/16392.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh "sudo cat /usr/share/ca-certificates/16392.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.02s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-325000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)
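The go-template above walks the labels map of the first node and prints each key. The same query outside the harness, as a sketch assuming kubectl and the functional-325000 context:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Print every label key on the first node in the cluster.
	tmpl := `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
	out, err := exec.Command("kubectl", "--context", "functional-325000",
		"get", "nodes", "--output=go-template", "--template="+tmpl).CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}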

TestFunctional/parallel/NonActiveRuntimeDisabled (0.14s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-325000 ssh "sudo systemctl is-active crio": exit status 1 (142.192887ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.14s)
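The exit status 1 above is the point of the test: with Docker as the active container runtime, crio should not be running inside the VM, so "systemctl is-active crio" prints "inactive" and exits non-zero. A sketch of the same probe, assuming the binary and profile from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-325000",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	fmt.Print(string(out)) // prints "inactive" when crio is not the selected runtime
	if err != nil {
		fmt.Println("non-zero exit as expected:", err)
	}
}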

TestFunctional/parallel/License (0.47s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.47s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.47s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-325000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-325000
docker.io/kicbase/echo-server:functional-325000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-325000 image ls --format short --alsologtostderr:
I0717 10:24:47.130964    2874 out.go:291] Setting OutFile to fd 1 ...
I0717 10:24:47.131182    2874 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:24:47.131188    2874 out.go:304] Setting ErrFile to fd 2...
I0717 10:24:47.131192    2874 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:24:47.131378    2874 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
I0717 10:24:47.131978    2874 config.go:182] Loaded profile config "functional-325000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:24:47.132078    2874 config.go:182] Loaded profile config "functional-325000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:24:47.132418    2874 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0717 10:24:47.132468    2874 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0717 10:24:47.141010    2874 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50833
I0717 10:24:47.141467    2874 main.go:141] libmachine: () Calling .GetVersion
I0717 10:24:47.141921    2874 main.go:141] libmachine: Using API Version  1
I0717 10:24:47.141932    2874 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 10:24:47.142189    2874 main.go:141] libmachine: () Calling .GetMachineName
I0717 10:24:47.142317    2874 main.go:141] libmachine: (functional-325000) Calling .GetState
I0717 10:24:47.142420    2874 main.go:141] libmachine: (functional-325000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0717 10:24:47.142502    2874 main.go:141] libmachine: (functional-325000) DBG | hyperkit pid from json: 2165
I0717 10:24:47.143860    2874 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0717 10:24:47.143885    2874 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0717 10:24:47.152525    2874 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50835
I0717 10:24:47.152921    2874 main.go:141] libmachine: () Calling .GetVersion
I0717 10:24:47.153277    2874 main.go:141] libmachine: Using API Version  1
I0717 10:24:47.153293    2874 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 10:24:47.153508    2874 main.go:141] libmachine: () Calling .GetMachineName
I0717 10:24:47.153637    2874 main.go:141] libmachine: (functional-325000) Calling .DriverName
I0717 10:24:47.153820    2874 ssh_runner.go:195] Run: systemctl --version
I0717 10:24:47.153841    2874 main.go:141] libmachine: (functional-325000) Calling .GetSSHHostname
I0717 10:24:47.153925    2874 main.go:141] libmachine: (functional-325000) Calling .GetSSHPort
I0717 10:24:47.154005    2874 main.go:141] libmachine: (functional-325000) Calling .GetSSHKeyPath
I0717 10:24:47.154093    2874 main.go:141] libmachine: (functional-325000) Calling .GetSSHUsername
I0717 10:24:47.154184    2874 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/functional-325000/id_rsa Username:docker}
I0717 10:24:47.194575    2874 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0717 10:24:47.219896    2874 main.go:141] libmachine: Making call to close driver server
I0717 10:24:47.219906    2874 main.go:141] libmachine: (functional-325000) Calling .Close
I0717 10:24:47.220056    2874 main.go:141] libmachine: Successfully made call to close driver server
I0717 10:24:47.220067    2874 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 10:24:47.220075    2874 main.go:141] libmachine: Making call to close driver server
I0717 10:24:47.220080    2874 main.go:141] libmachine: (functional-325000) Calling .Close
I0717 10:24:47.220118    2874 main.go:141] libmachine: (functional-325000) DBG | Closing plugin on server side
I0717 10:24:47.220223    2874 main.go:141] libmachine: (functional-325000) DBG | Closing plugin on server side
I0717 10:24:47.220223    2874 main.go:141] libmachine: Successfully made call to close driver server
I0717 10:24:47.220235    2874 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.17s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-325000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/localhost/my-image                | functional-325000 | d0b6668854f3a | 1.24MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-proxy                  | v1.30.2           | 53c535741fb44 | 84.7MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kicbase/echo-server               | functional-325000 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-apiserver              | v1.30.2           | 56ce0fd9fb532 | 117MB  |
| registry.k8s.io/kube-controller-manager     | v1.30.2           | e874818b3caac | 111MB  |
| registry.k8s.io/kube-scheduler              | v1.30.2           | 7820c83aa1394 | 62MB   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-325000 | cc9ccbee7b5f9 | 30B    |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/library/nginx                     | latest            | fffffc90d343c | 188MB  |
| docker.io/library/nginx                     | alpine            | 099a2d701db1f | 43.2MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-325000 image ls --format table --alsologtostderr:
I0717 10:24:49.680730    2899 out.go:291] Setting OutFile to fd 1 ...
I0717 10:24:49.680920    2899 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:24:49.680926    2899 out.go:304] Setting ErrFile to fd 2...
I0717 10:24:49.680930    2899 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:24:49.681101    2899 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
I0717 10:24:49.681704    2899 config.go:182] Loaded profile config "functional-325000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:24:49.681808    2899 config.go:182] Loaded profile config "functional-325000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:24:49.682155    2899 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0717 10:24:49.682200    2899 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0717 10:24:49.690662    2899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50872
I0717 10:24:49.691089    2899 main.go:141] libmachine: () Calling .GetVersion
I0717 10:24:49.691506    2899 main.go:141] libmachine: Using API Version  1
I0717 10:24:49.691520    2899 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 10:24:49.691732    2899 main.go:141] libmachine: () Calling .GetMachineName
I0717 10:24:49.691832    2899 main.go:141] libmachine: (functional-325000) Calling .GetState
I0717 10:24:49.691924    2899 main.go:141] libmachine: (functional-325000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0717 10:24:49.691997    2899 main.go:141] libmachine: (functional-325000) DBG | hyperkit pid from json: 2165
I0717 10:24:49.693260    2899 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0717 10:24:49.693292    2899 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0717 10:24:49.701954    2899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50875
I0717 10:24:49.702321    2899 main.go:141] libmachine: () Calling .GetVersion
I0717 10:24:49.702696    2899 main.go:141] libmachine: Using API Version  1
I0717 10:24:49.702710    2899 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 10:24:49.702942    2899 main.go:141] libmachine: () Calling .GetMachineName
I0717 10:24:49.703046    2899 main.go:141] libmachine: (functional-325000) Calling .DriverName
I0717 10:24:49.703208    2899 ssh_runner.go:195] Run: systemctl --version
I0717 10:24:49.703228    2899 main.go:141] libmachine: (functional-325000) Calling .GetSSHHostname
I0717 10:24:49.703305    2899 main.go:141] libmachine: (functional-325000) Calling .GetSSHPort
I0717 10:24:49.703391    2899 main.go:141] libmachine: (functional-325000) Calling .GetSSHKeyPath
I0717 10:24:49.703484    2899 main.go:141] libmachine: (functional-325000) Calling .GetSSHUsername
I0717 10:24:49.703579    2899 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/functional-325000/id_rsa Username:docker}
I0717 10:24:49.740906    2899 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0717 10:24:49.762763    2899 main.go:141] libmachine: Making call to close driver server
I0717 10:24:49.762778    2899 main.go:141] libmachine: (functional-325000) Calling .Close
I0717 10:24:49.762924    2899 main.go:141] libmachine: (functional-325000) DBG | Closing plugin on server side
I0717 10:24:49.762932    2899 main.go:141] libmachine: Successfully made call to close driver server
I0717 10:24:49.762942    2899 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 10:24:49.762952    2899 main.go:141] libmachine: Making call to close driver server
I0717 10:24:49.762962    2899 main.go:141] libmachine: (functional-325000) Calling .Close
I0717 10:24:49.763091    2899 main.go:141] libmachine: Successfully made call to close driver server
I0717 10:24:49.763101    2899 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 10:24:49.763095    2899 main.go:141] libmachine: (functional-325000) DBG | Closing plugin on server side
2024/07/17 10:24:54 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-325000 image ls --format json --alsologtostderr:
[{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"size":"117000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444
987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-325000"],"size":"4940000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"cc9ccbee7b5f95ca0709955de680ae7a13bc6b9fd59a9b4ae85a94f05b98bc3a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache
-test:functional-325000"],"size":"30"},{"id":"fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.2"],"size":"111000000"},{"id":"53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.2"],"size":"84700000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"d0b6668854f3a802db5f2b4bbd2b6fee03bb225260fb719ef232016a8bce0eae","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-325000"],"size":"1240000"},{"id":"099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"7820c83aa13945352
2e9028341d0d4f23ca2721ec80c7a47425446d11157b940","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"62000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-325000 image ls --format json --alsologtostderr:
I0717 10:24:49.523754    2895 out.go:291] Setting OutFile to fd 1 ...
I0717 10:24:49.524043    2895 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:24:49.524048    2895 out.go:304] Setting ErrFile to fd 2...
I0717 10:24:49.524052    2895 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:24:49.524228    2895 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
I0717 10:24:49.524800    2895 config.go:182] Loaded profile config "functional-325000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:24:49.524899    2895 config.go:182] Loaded profile config "functional-325000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:24:49.525243    2895 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0717 10:24:49.525290    2895 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0717 10:24:49.533611    2895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50867
I0717 10:24:49.534036    2895 main.go:141] libmachine: () Calling .GetVersion
I0717 10:24:49.534472    2895 main.go:141] libmachine: Using API Version  1
I0717 10:24:49.534501    2895 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 10:24:49.534750    2895 main.go:141] libmachine: () Calling .GetMachineName
I0717 10:24:49.534880    2895 main.go:141] libmachine: (functional-325000) Calling .GetState
I0717 10:24:49.534977    2895 main.go:141] libmachine: (functional-325000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0717 10:24:49.535044    2895 main.go:141] libmachine: (functional-325000) DBG | hyperkit pid from json: 2165
I0717 10:24:49.536333    2895 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0717 10:24:49.536363    2895 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0717 10:24:49.544713    2895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50869
I0717 10:24:49.545070    2895 main.go:141] libmachine: () Calling .GetVersion
I0717 10:24:49.545404    2895 main.go:141] libmachine: Using API Version  1
I0717 10:24:49.545417    2895 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 10:24:49.545661    2895 main.go:141] libmachine: () Calling .GetMachineName
I0717 10:24:49.545796    2895 main.go:141] libmachine: (functional-325000) Calling .DriverName
I0717 10:24:49.545954    2895 ssh_runner.go:195] Run: systemctl --version
I0717 10:24:49.545973    2895 main.go:141] libmachine: (functional-325000) Calling .GetSSHHostname
I0717 10:24:49.546058    2895 main.go:141] libmachine: (functional-325000) Calling .GetSSHPort
I0717 10:24:49.546139    2895 main.go:141] libmachine: (functional-325000) Calling .GetSSHKeyPath
I0717 10:24:49.546227    2895 main.go:141] libmachine: (functional-325000) Calling .GetSSHUsername
I0717 10:24:49.546305    2895 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/functional-325000/id_rsa Username:docker}
I0717 10:24:49.581030    2895 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0717 10:24:49.602023    2895 main.go:141] libmachine: Making call to close driver server
I0717 10:24:49.602032    2895 main.go:141] libmachine: (functional-325000) Calling .Close
I0717 10:24:49.602219    2895 main.go:141] libmachine: Successfully made call to close driver server
I0717 10:24:49.602229    2895 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 10:24:49.602238    2895 main.go:141] libmachine: Making call to close driver server
I0717 10:24:49.602244    2895 main.go:141] libmachine: (functional-325000) Calling .Close
I0717 10:24:49.602393    2895 main.go:141] libmachine: (functional-325000) DBG | Closing plugin on server side
I0717 10:24:49.602432    2895 main.go:141] libmachine: Successfully made call to close driver server
I0717 10:24:49.602440    2895 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)
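The listing above can be reproduced outside the test harness with the same command that functional_test.go:260 drives; a minimal sketch, assuming the functional-325000 profile from this run still exists (the jq post-processing is an optional assumption, not something the test does):

  $ out/minikube-darwin-amd64 -p functional-325000 image ls --format json --alsologtostderr
  $ out/minikube-darwin-amd64 -p functional-325000 image ls --format json | jq -r '.[].repoTags[]'   # optional: flatten to repo tags, assumes jq is installed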

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-325000 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "117000000"
- id: 53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "84700000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "62000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: cc9ccbee7b5f95ca0709955de680ae7a13bc6b9fd59a9b4ae85a94f05b98bc3a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-325000
size: "30"
- id: 099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "111000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-325000
size: "4940000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-325000 image ls --format yaml --alsologtostderr:
I0717 10:24:47.301109    2878 out.go:291] Setting OutFile to fd 1 ...
I0717 10:24:47.301388    2878 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:24:47.301394    2878 out.go:304] Setting ErrFile to fd 2...
I0717 10:24:47.301397    2878 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:24:47.301565    2878 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
I0717 10:24:47.302150    2878 config.go:182] Loaded profile config "functional-325000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:24:47.302250    2878 config.go:182] Loaded profile config "functional-325000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:24:47.302619    2878 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0717 10:24:47.302662    2878 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0717 10:24:47.310777    2878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50840
I0717 10:24:47.311204    2878 main.go:141] libmachine: () Calling .GetVersion
I0717 10:24:47.311622    2878 main.go:141] libmachine: Using API Version  1
I0717 10:24:47.311632    2878 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 10:24:47.311840    2878 main.go:141] libmachine: () Calling .GetMachineName
I0717 10:24:47.311964    2878 main.go:141] libmachine: (functional-325000) Calling .GetState
I0717 10:24:47.312052    2878 main.go:141] libmachine: (functional-325000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0717 10:24:47.312138    2878 main.go:141] libmachine: (functional-325000) DBG | hyperkit pid from json: 2165
I0717 10:24:47.313428    2878 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0717 10:24:47.313452    2878 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0717 10:24:47.321563    2878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50842
I0717 10:24:47.321910    2878 main.go:141] libmachine: () Calling .GetVersion
I0717 10:24:47.322219    2878 main.go:141] libmachine: Using API Version  1
I0717 10:24:47.322230    2878 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 10:24:47.322429    2878 main.go:141] libmachine: () Calling .GetMachineName
I0717 10:24:47.322534    2878 main.go:141] libmachine: (functional-325000) Calling .DriverName
I0717 10:24:47.322736    2878 ssh_runner.go:195] Run: systemctl --version
I0717 10:24:47.322784    2878 main.go:141] libmachine: (functional-325000) Calling .GetSSHHostname
I0717 10:24:47.322884    2878 main.go:141] libmachine: (functional-325000) Calling .GetSSHPort
I0717 10:24:47.322973    2878 main.go:141] libmachine: (functional-325000) Calling .GetSSHKeyPath
I0717 10:24:47.323065    2878 main.go:141] libmachine: (functional-325000) Calling .GetSSHUsername
I0717 10:24:47.323154    2878 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/functional-325000/id_rsa Username:docker}
I0717 10:24:47.357827    2878 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0717 10:24:47.376725    2878 main.go:141] libmachine: Making call to close driver server
I0717 10:24:47.376740    2878 main.go:141] libmachine: (functional-325000) Calling .Close
I0717 10:24:47.376889    2878 main.go:141] libmachine: Successfully made call to close driver server
I0717 10:24:47.376899    2878 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 10:24:47.376907    2878 main.go:141] libmachine: Making call to close driver server
I0717 10:24:47.376910    2878 main.go:141] libmachine: (functional-325000) DBG | Closing plugin on server side
I0717 10:24:47.376915    2878 main.go:141] libmachine: (functional-325000) Calling .Close
I0717 10:24:47.377042    2878 main.go:141] libmachine: (functional-325000) DBG | Closing plugin on server side
I0717 10:24:47.377066    2878 main.go:141] libmachine: Successfully made call to close driver server
I0717 10:24:47.377077    2878 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-325000 ssh pgrep buildkitd: exit status 1 (130.624887ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 image build -t localhost/my-image:functional-325000 testdata/build --alsologtostderr
E0717 10:24:48.076442    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-325000 image build -t localhost/my-image:functional-325000 testdata/build --alsologtostderr: (1.769178276s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-325000 image build -t localhost/my-image:functional-325000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 49496b93d536
---> Removed intermediate container 49496b93d536
---> f4e3858f5a85
Step 3/3 : ADD content.txt /
---> d0b6668854f3
Successfully built d0b6668854f3
Successfully tagged localhost/my-image:functional-325000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-325000 image build -t localhost/my-image:functional-325000 testdata/build --alsologtostderr:
I0717 10:24:47.586954    2887 out.go:291] Setting OutFile to fd 1 ...
I0717 10:24:47.587310    2887 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:24:47.587316    2887 out.go:304] Setting ErrFile to fd 2...
I0717 10:24:47.587320    2887 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:24:47.587497    2887 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
I0717 10:24:47.588079    2887 config.go:182] Loaded profile config "functional-325000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:24:47.589622    2887 config.go:182] Loaded profile config "functional-325000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:24:47.589960    2887 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0717 10:24:47.590000    2887 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0717 10:24:47.598187    2887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50853
I0717 10:24:47.598652    2887 main.go:141] libmachine: () Calling .GetVersion
I0717 10:24:47.599052    2887 main.go:141] libmachine: Using API Version  1
I0717 10:24:47.599062    2887 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 10:24:47.599317    2887 main.go:141] libmachine: () Calling .GetMachineName
I0717 10:24:47.599442    2887 main.go:141] libmachine: (functional-325000) Calling .GetState
I0717 10:24:47.599529    2887 main.go:141] libmachine: (functional-325000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0717 10:24:47.599605    2887 main.go:141] libmachine: (functional-325000) DBG | hyperkit pid from json: 2165
I0717 10:24:47.600883    2887 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0717 10:24:47.600908    2887 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0717 10:24:47.609423    2887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50856
I0717 10:24:47.609806    2887 main.go:141] libmachine: () Calling .GetVersion
I0717 10:24:47.610163    2887 main.go:141] libmachine: Using API Version  1
I0717 10:24:47.610178    2887 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 10:24:47.610438    2887 main.go:141] libmachine: () Calling .GetMachineName
I0717 10:24:47.610572    2887 main.go:141] libmachine: (functional-325000) Calling .DriverName
I0717 10:24:47.610740    2887 ssh_runner.go:195] Run: systemctl --version
I0717 10:24:47.610759    2887 main.go:141] libmachine: (functional-325000) Calling .GetSSHHostname
I0717 10:24:47.610832    2887 main.go:141] libmachine: (functional-325000) Calling .GetSSHPort
I0717 10:24:47.610911    2887 main.go:141] libmachine: (functional-325000) Calling .GetSSHKeyPath
I0717 10:24:47.610986    2887 main.go:141] libmachine: (functional-325000) Calling .GetSSHUsername
I0717 10:24:47.611074    2887 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/functional-325000/id_rsa Username:docker}
I0717 10:24:47.648869    2887 build_images.go:161] Building image from path: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.2873132489.tar
I0717 10:24:47.648944    2887 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0717 10:24:47.657730    2887 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2873132489.tar
I0717 10:24:47.661351    2887 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2873132489.tar: stat -c "%s %y" /var/lib/minikube/build/build.2873132489.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2873132489.tar': No such file or directory
I0717 10:24:47.661375    2887 ssh_runner.go:362] scp /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.2873132489.tar --> /var/lib/minikube/build/build.2873132489.tar (3072 bytes)
I0717 10:24:47.682424    2887 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2873132489
I0717 10:24:47.690125    2887 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2873132489 -xf /var/lib/minikube/build/build.2873132489.tar
I0717 10:24:47.697387    2887 docker.go:360] Building image: /var/lib/minikube/build/build.2873132489
I0717 10:24:47.697466    2887 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-325000 /var/lib/minikube/build/build.2873132489
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
I0717 10:24:49.256037    2887 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-325000 /var/lib/minikube/build/build.2873132489: (1.558532823s)
I0717 10:24:49.256105    2887 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2873132489
I0717 10:24:49.264842    2887 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2873132489.tar
I0717 10:24:49.272298    2887 build_images.go:217] Built localhost/my-image:functional-325000 from /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.2873132489.tar
I0717 10:24:49.272324    2887 build_images.go:133] succeeded building to: functional-325000
I0717 10:24:49.272329    2887 build_images.go:134] failed building to: 
I0717 10:24:49.272346    2887 main.go:141] libmachine: Making call to close driver server
I0717 10:24:49.272354    2887 main.go:141] libmachine: (functional-325000) Calling .Close
I0717 10:24:49.272510    2887 main.go:141] libmachine: Successfully made call to close driver server
I0717 10:24:49.272516    2887 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 10:24:49.272520    2887 main.go:141] libmachine: Making call to close driver server
I0717 10:24:49.272534    2887 main.go:141] libmachine: (functional-325000) Calling .Close
I0717 10:24:49.272703    2887 main.go:141] libmachine: (functional-325000) DBG | Closing plugin on server side
I0717 10:24:49.272703    2887 main.go:141] libmachine: Successfully made call to close driver server
I0717 10:24:49.272713    2887 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.07s)
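As the stderr above shows, image build packs testdata/build into a tarball, copies it into the VM, and runs docker build there (the legacy builder path, since pgrep buildkitd found no BuildKit daemon). A minimal way to repeat the same flow by hand, assuming the functional-325000 profile from this run:

  $ out/minikube-darwin-amd64 -p functional-325000 ssh pgrep buildkitd                 # exit status 1 here just means BuildKit is not running
  $ out/minikube-darwin-amd64 -p functional-325000 image build -t localhost/my-image:functional-325000 testdata/build --alsologtostderr
  $ out/minikube-darwin-amd64 -p functional-325000 image ls                            # the new localhost/my-image:functional-325000 tag should be listed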

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.793103012s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-325000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.83s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv/bash (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-325000 docker-env) && out/minikube-darwin-amd64 status -p functional-325000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-325000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.61s)
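The docker-env check works by evaluating the exported environment in a bash subshell so that the host docker client talks to the daemon inside the VM; the same one-liner the test runs can be used interactively (bash is assumed, as in the test):

  $ eval $(out/minikube-darwin-amd64 -p functional-325000 docker-env) && docker images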

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)
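All three update-context subtests issue the same command, which rewrites the kubeconfig entry for the profile so that kubectl points at the cluster's current endpoint. A sketch, assuming the functional-325000 profile from this run:

  $ out/minikube-darwin-amd64 -p functional-325000 update-context --alsologtostderr -v=2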

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 image load --daemon docker.io/kicbase/echo-server:functional-325000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.95s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 image load --daemon docker.io/kicbase/echo-server:functional-325000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.62s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-325000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 image load --daemon docker.io/kicbase/echo-server:functional-325000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 image save docker.io/kicbase/echo-server:functional-325000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 image rm docker.io/kicbase/echo-server:functional-325000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-325000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 image save --daemon docker.io/kicbase/echo-server:functional-325000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-325000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)
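Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise a full save/remove/load round trip for the echo-server image. The same sequence can be replayed by hand; this sketch reuses the tar path from this run, but any writable path would do:

  $ out/minikube-darwin-amd64 -p functional-325000 image save docker.io/kicbase/echo-server:functional-325000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
  $ out/minikube-darwin-amd64 -p functional-325000 image rm docker.io/kicbase/echo-server:functional-325000 --alsologtostderr
  $ out/minikube-darwin-amd64 -p functional-325000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
  $ out/minikube-darwin-amd64 -p functional-325000 image save --daemon docker.io/kicbase/echo-server:functional-325000 --alsologtostderr
  $ docker image inspect docker.io/kicbase/echo-server:functional-325000              # confirms the image landed back in the host daemon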

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (22.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-325000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-325000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-4qqdl" [4fc1a371-ee6f-46d8-84cb-633bfc1408f8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-4qqdl" [4fc1a371-ee6f-46d8-84cb-633bfc1408f8] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 22.004993918s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (22.11s)
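DeployApp drives plain kubectl against the functional-325000 context: it creates the hello-node deployment from the echoserver image, exposes it as a NodePort service, and waits for the app=hello-node pod to become Ready. A hand-run sketch of the same steps (the final get is an added check, not part of the test):

  $ kubectl --context functional-325000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
  $ kubectl --context functional-325000 expose deployment hello-node --type=NodePort --port=8080
  $ kubectl --context functional-325000 get pods -l app=hello-node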

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-325000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-325000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-325000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2579: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-325000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-325000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-325000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [4b798c36-49b1-4a46-9204-f063efcc4b9d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [4b798c36-49b1-4a46-9204-f063efcc4b9d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.002979235s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.13s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 service list -o json
functional_test.go:1490: Took "371.879787ms" to run "out/minikube-darwin-amd64 -p functional-325000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.169.0.4:30744
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.169.0.4:30744
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.29s)
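The HTTPS, Format and URL subtests all resolve the hello-node NodePort endpoint; the 192.169.0.4:30744 address is specific to this run. A sketch of the same lookups (curl is an added assumption, not used by the test):

  $ out/minikube-darwin-amd64 -p functional-325000 service list -o json
  $ out/minikube-darwin-amd64 -p functional-325000 service hello-node --url
  http://192.169.0.4:30744
  $ curl http://192.169.0.4:30744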

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-325000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.202.241 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-325000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)
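The tunnel group keeps a minikube tunnel process running, waits for nginx-svc to receive a LoadBalancer ingress IP, and then checks that the service name resolves both through the cluster DNS at 10.96.0.10 and through the macOS resolver. A sketch of the same checks, assuming the tunnel is simply backgrounded in a shell rather than managed by the harness:

  $ out/minikube-darwin-amd64 -p functional-325000 tunnel --alsologtostderr &
  $ kubectl --context functional-325000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
  $ dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
  $ dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.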

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "177.024142ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "78.115853ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "178.380159ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "78.334015ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)
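The timing gap inside this test (~178ms for a full profile list versus ~78ms with --light) is consistent with --light skipping the per-profile status probe. Both variants can be run directly:

  $ out/minikube-darwin-amd64 profile list -o json
  $ out/minikube-darwin-amd64 profile list -o json --light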

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (5.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-325000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port1888212378/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721237074916858000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port1888212378/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721237074916858000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port1888212378/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721237074916858000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port1888212378/001/test-1721237074916858000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-325000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (148.360006ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 17 17:24 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 17 17:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 17 17:24 test-1721237074916858000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh cat /mount-9p/test-1721237074916858000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-325000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8fab0e2d-dc8f-4177-af52-9146f98a2226] Pending
helpers_test.go:344: "busybox-mount" [8fab0e2d-dc8f-4177-af52-9146f98a2226] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8fab0e2d-dc8f-4177-af52-9146f98a2226] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [8fab0e2d-dc8f-4177-af52-9146f98a2226] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004517494s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-325000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-325000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port1888212378/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.85s)
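any-port mounts a host directory into the VM at /mount-9p over 9p, confirms the mount with findmnt, and then lets the busybox-mount pod read and write through it. The same flow can be tried by hand; /tmp/mount-demo below is a hypothetical host directory standing in for the temp dir the test generates:

  $ out/minikube-darwin-amd64 mount -p functional-325000 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
  $ out/minikube-darwin-amd64 -p functional-325000 ssh "findmnt -T /mount-9p | grep 9p"
  $ out/minikube-darwin-amd64 -p functional-325000 ssh -- ls -la /mount-9p
  $ out/minikube-darwin-amd64 mount -p functional-325000 --kill=true                   # tears the mount down, as VerifyCleanup does below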

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-325000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1483306815/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-325000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (154.527866ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-325000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1483306815/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-325000 ssh "sudo umount -f /mount-9p": exit status 1 (128.110321ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-325000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-325000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1483306815/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.77s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-325000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1001892973/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-325000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1001892973/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-325000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1001892973/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-325000 ssh "findmnt -T" /mount1: exit status 1 (178.721375ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-325000 ssh "findmnt -T" /mount1: exit status 1 (173.274124ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-325000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-325000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-325000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1001892973/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-325000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1001892973/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-325000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1001892973/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.95s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-325000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-325000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-325000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (312.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-572000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
E0717 10:25:15.763440    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 10:28:50.012062    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
E0717 10:28:50.017948    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
E0717 10:28:50.028491    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
E0717 10:28:50.048727    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
E0717 10:28:50.088849    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
E0717 10:28:50.170124    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
E0717 10:28:50.331632    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
E0717 10:28:50.652863    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
E0717 10:28:51.293578    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
E0717 10:28:52.575128    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
E0717 10:28:55.135500    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
E0717 10:29:00.255723    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
E0717 10:29:10.496141    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
E0717 10:29:30.977054    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
E0717 10:29:48.081491    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 10:30:11.937877    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-572000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : (5m12.517739504s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (312.90s)
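
Every "(dbg) Run:" entry above is the test harness invoking the minikube binary under test as an external process. The following minimal Go sketch drives the same two steps, creating the HA cluster (three control-plane nodes, as the later status output shows) and then querying node status. It reuses the binary path, profile name, and flags shown in this log; it is an illustration of the pattern, not the actual ha_test.go helper.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runMinikube shells out to the minikube binary the same way the
// "(dbg) Run:" lines above do, streaming output to the console.
func runMinikube(bin string, args ...string) error {
	cmd := exec.Command(bin, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Binary path and profile name are taken from this log; adjust for your environment.
	bin := "out/minikube-darwin-amd64"
	profile := "ha-572000"

	// Start a multi-control-plane (HA) cluster on the hyperkit driver.
	if err := runMinikube(bin, "start", "-p", profile, "--wait=true",
		"--memory=2200", "--ha", "--driver=hyperkit"); err != nil {
		fmt.Fprintln(os.Stderr, "start failed:", err)
		os.Exit(1)
	}

	// Query the status of every node in the profile.
	if err := runMinikube(bin, "-p", profile, "status"); err != nil {
		fmt.Fprintln(os.Stderr, "status failed:", err)
		os.Exit(1)
	}
}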

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-572000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-572000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-572000 -- rollout status deployment/busybox: (3.453014942s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-572000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-572000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-572000 -- exec busybox-fc5497c4f-5r4wl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-572000 -- exec busybox-fc5497c4f-9sdw5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-572000 -- exec busybox-fc5497c4f-jhz2d -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-572000 -- exec busybox-fc5497c4f-5r4wl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-572000 -- exec busybox-fc5497c4f-9sdw5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-572000 -- exec busybox-fc5497c4f-jhz2d -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-572000 -- exec busybox-fc5497c4f-5r4wl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-572000 -- exec busybox-fc5497c4f-9sdw5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-572000 -- exec busybox-fc5497c4f-jhz2d -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.72s)
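
The DeployApp steps above apply testdata/ha/ha-pod-dns-test.yaml, wait for the busybox deployment to roll out, and then resolve cluster DNS names from inside each pod. A compact Go sketch of the same verification, assuming the kubeconfig context is named ha-572000 as in the NodeLabels step below; pod names change between runs.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubectl runs a kubectl command against the ha-572000 context and returns combined output.
func kubectl(args ...string) (string, error) {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "ha-572000"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	// Wait for the busybox deployment created from ha-pod-dns-test.yaml to roll out.
	if _, err := kubectl("rollout", "status", "deployment/busybox"); err != nil {
		panic(err)
	}

	// List the busybox pod names, then resolve the API service name from inside each one.
	names, err := kubectl("get", "pods", "-o", "jsonpath={.items[*].metadata.name}")
	if err != nil {
		panic(err)
	}
	for _, pod := range strings.Fields(names) {
		out, err := kubectl("exec", pod, "--", "nslookup", "kubernetes.default.svc.cluster.local")
		fmt.Printf("%s:\n%s(err=%v)\n", pod, out, err)
	}
}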

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-572000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-572000 -- exec busybox-fc5497c4f-5r4wl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-572000 -- exec busybox-fc5497c4f-5r4wl -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-572000 -- exec busybox-fc5497c4f-9sdw5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-572000 -- exec busybox-fc5497c4f-9sdw5 -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-572000 -- exec busybox-fc5497c4f-jhz2d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-572000 -- exec busybox-fc5497c4f-jhz2d -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.27s)
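
PingHostFromPods extracts the host IP that a pod sees for host.minikube.internal (the awk/cut pipeline pulls it from the fifth line of nslookup output) and then pings it once. A Go sketch of that check, using a pod name from this particular run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-fc5497c4f-5r4wl" // pod name from this run; yours will differ

	// Inside the pod, resolve host.minikube.internal and pull the address out of
	// the fifth line of nslookup output, exactly as the test's shell pipeline does.
	lookup := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("kubectl", "--context", "ha-572000",
		"exec", pod, "--", "sh", "-c", lookup).Output()
	if err != nil {
		panic(err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host IP seen from the pod:", hostIP)

	// Send a single ICMP echo to the host from inside the pod.
	ping := fmt.Sprintf("ping -c 1 %s", hostIP)
	if err := exec.Command("kubectl", "--context", "ha-572000",
		"exec", pod, "--", "sh", "-c", ping).Run(); err != nil {
		panic(err)
	}
	fmt.Println("host is reachable from the pod")
}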

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (48.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-572000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-572000 -v=7 --alsologtostderr: (48.537785035s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (48.99s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-572000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.05s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (2.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (2.869719316s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (2.87s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (9.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 cp testdata/cp-test.txt ha-572000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 cp ha-572000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3497274976/001/cp-test_ha-572000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 cp ha-572000:/home/docker/cp-test.txt ha-572000-m02:/home/docker/cp-test_ha-572000_ha-572000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m02 "sudo cat /home/docker/cp-test_ha-572000_ha-572000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 cp ha-572000:/home/docker/cp-test.txt ha-572000-m03:/home/docker/cp-test_ha-572000_ha-572000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m03 "sudo cat /home/docker/cp-test_ha-572000_ha-572000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 cp ha-572000:/home/docker/cp-test.txt ha-572000-m04:/home/docker/cp-test_ha-572000_ha-572000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m04 "sudo cat /home/docker/cp-test_ha-572000_ha-572000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 cp testdata/cp-test.txt ha-572000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 cp ha-572000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3497274976/001/cp-test_ha-572000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 cp ha-572000-m02:/home/docker/cp-test.txt ha-572000:/home/docker/cp-test_ha-572000-m02_ha-572000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000 "sudo cat /home/docker/cp-test_ha-572000-m02_ha-572000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 cp ha-572000-m02:/home/docker/cp-test.txt ha-572000-m03:/home/docker/cp-test_ha-572000-m02_ha-572000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m03 "sudo cat /home/docker/cp-test_ha-572000-m02_ha-572000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 cp ha-572000-m02:/home/docker/cp-test.txt ha-572000-m04:/home/docker/cp-test_ha-572000-m02_ha-572000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m04 "sudo cat /home/docker/cp-test_ha-572000-m02_ha-572000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 cp testdata/cp-test.txt ha-572000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 cp ha-572000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3497274976/001/cp-test_ha-572000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 cp ha-572000-m03:/home/docker/cp-test.txt ha-572000:/home/docker/cp-test_ha-572000-m03_ha-572000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000 "sudo cat /home/docker/cp-test_ha-572000-m03_ha-572000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 cp ha-572000-m03:/home/docker/cp-test.txt ha-572000-m02:/home/docker/cp-test_ha-572000-m03_ha-572000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m02 "sudo cat /home/docker/cp-test_ha-572000-m03_ha-572000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 cp ha-572000-m03:/home/docker/cp-test.txt ha-572000-m04:/home/docker/cp-test_ha-572000-m03_ha-572000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m04 "sudo cat /home/docker/cp-test_ha-572000-m03_ha-572000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 cp testdata/cp-test.txt ha-572000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3497274976/001/cp-test_ha-572000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt ha-572000:/home/docker/cp-test_ha-572000-m04_ha-572000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000 "sudo cat /home/docker/cp-test_ha-572000-m04_ha-572000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt ha-572000-m02:/home/docker/cp-test_ha-572000-m04_ha-572000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m02 "sudo cat /home/docker/cp-test_ha-572000-m04_ha-572000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt ha-572000-m03:/home/docker/cp-test_ha-572000-m04_ha-572000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 ssh -n ha-572000-m03 "sudo cat /home/docker/cp-test_ha-572000-m04_ha-572000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (9.31s)
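
The CopyFile steps follow one pattern throughout: copy a file with minikube cp, then read it back with minikube ssh to confirm it arrived. A Go sketch of that round trip across the four nodes listed above; node names are taken from this run.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary from this report and returns combined output.
func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "ha-572000"
	nodes := []string{"ha-572000", "ha-572000-m02", "ha-572000-m03", "ha-572000-m04"}

	// Push the same test file to every node, then read it back over SSH to
	// confirm the copy landed, mirroring the cp/ssh pairs in the log above.
	for _, node := range nodes {
		dest := node + ":/home/docker/cp-test.txt"
		if _, err := run("-p", profile, "cp", "testdata/cp-test.txt", dest); err != nil {
			panic(fmt.Sprintf("cp to %s failed: %v", node, err))
		}
		out, err := run("-p", profile, "ssh", "-n", node, "sudo cat /home/docker/cp-test.txt")
		if err != nil {
			panic(fmt.Sprintf("read-back on %s failed: %v", node, err))
		}
		fmt.Printf("%s: %s", node, out)
	}
}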

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (8.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-572000 node stop m02 -v=7 --alsologtostderr: (8.35332869s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr: exit status 7 (357.93572ms)

                                                
                                                
-- stdout --
	ha-572000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-572000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-572000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-572000-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 10:31:29.578683    3437 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:31:29.578989    3437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:31:29.578994    3437 out.go:304] Setting ErrFile to fd 2...
	I0717 10:31:29.578998    3437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:31:29.579188    3437 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
	I0717 10:31:29.579376    3437 out.go:298] Setting JSON to false
	I0717 10:31:29.579399    3437 mustload.go:65] Loading cluster: ha-572000
	I0717 10:31:29.579444    3437 notify.go:220] Checking for updates...
	I0717 10:31:29.579734    3437 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:31:29.579749    3437 status.go:255] checking status of ha-572000 ...
	I0717 10:31:29.580127    3437 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:31:29.580171    3437 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:31:29.588942    3437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51608
	I0717 10:31:29.589313    3437 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:31:29.589697    3437 main.go:141] libmachine: Using API Version  1
	I0717 10:31:29.589708    3437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:31:29.589916    3437 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:31:29.590030    3437 main.go:141] libmachine: (ha-572000) Calling .GetState
	I0717 10:31:29.590117    3437 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:31:29.590253    3437 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 2926
	I0717 10:31:29.591229    3437 status.go:330] ha-572000 host status = "Running" (err=<nil>)
	I0717 10:31:29.591248    3437 host.go:66] Checking if "ha-572000" exists ...
	I0717 10:31:29.591490    3437 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:31:29.591523    3437 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:31:29.599933    3437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51610
	I0717 10:31:29.600321    3437 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:31:29.600634    3437 main.go:141] libmachine: Using API Version  1
	I0717 10:31:29.600642    3437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:31:29.600885    3437 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:31:29.600995    3437 main.go:141] libmachine: (ha-572000) Calling .GetIP
	I0717 10:31:29.601073    3437 host.go:66] Checking if "ha-572000" exists ...
	I0717 10:31:29.601354    3437 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:31:29.601379    3437 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:31:29.610073    3437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51612
	I0717 10:31:29.610442    3437 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:31:29.610790    3437 main.go:141] libmachine: Using API Version  1
	I0717 10:31:29.610807    3437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:31:29.611005    3437 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:31:29.611120    3437 main.go:141] libmachine: (ha-572000) Calling .DriverName
	I0717 10:31:29.611272    3437 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 10:31:29.611292    3437 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
	I0717 10:31:29.611373    3437 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
	I0717 10:31:29.611451    3437 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
	I0717 10:31:29.611527    3437 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
	I0717 10:31:29.611610    3437 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
	I0717 10:31:29.646133    3437 ssh_runner.go:195] Run: systemctl --version
	I0717 10:31:29.650671    3437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 10:31:29.665489    3437 kubeconfig.go:125] found "ha-572000" server: "https://192.169.0.254:8443"
	I0717 10:31:29.665513    3437 api_server.go:166] Checking apiserver status ...
	I0717 10:31:29.665565    3437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 10:31:29.677264    3437 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1988/cgroup
	W0717 10:31:29.685187    3437 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1988/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 10:31:29.685239    3437 ssh_runner.go:195] Run: ls
	I0717 10:31:29.688520    3437 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0717 10:31:29.691541    3437 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0717 10:31:29.691553    3437 status.go:422] ha-572000 apiserver status = Running (err=<nil>)
	I0717 10:31:29.691569    3437 status.go:257] ha-572000 status: &{Name:ha-572000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 10:31:29.691582    3437 status.go:255] checking status of ha-572000-m02 ...
	I0717 10:31:29.691908    3437 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:31:29.691929    3437 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:31:29.700580    3437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51616
	I0717 10:31:29.700974    3437 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:31:29.701311    3437 main.go:141] libmachine: Using API Version  1
	I0717 10:31:29.701323    3437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:31:29.701517    3437 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:31:29.701615    3437 main.go:141] libmachine: (ha-572000-m02) Calling .GetState
	I0717 10:31:29.701688    3437 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:31:29.701762    3437 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 2958
	I0717 10:31:29.702740    3437 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid 2958 missing from process table
	I0717 10:31:29.702774    3437 status.go:330] ha-572000-m02 host status = "Stopped" (err=<nil>)
	I0717 10:31:29.702783    3437 status.go:343] host is not running, skipping remaining checks
	I0717 10:31:29.702789    3437 status.go:257] ha-572000-m02 status: &{Name:ha-572000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 10:31:29.702799    3437 status.go:255] checking status of ha-572000-m03 ...
	I0717 10:31:29.703058    3437 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:31:29.703082    3437 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:31:29.711461    3437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51618
	I0717 10:31:29.711801    3437 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:31:29.712164    3437 main.go:141] libmachine: Using API Version  1
	I0717 10:31:29.712178    3437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:31:29.712376    3437 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:31:29.712477    3437 main.go:141] libmachine: (ha-572000-m03) Calling .GetState
	I0717 10:31:29.712558    3437 main.go:141] libmachine: (ha-572000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:31:29.712649    3437 main.go:141] libmachine: (ha-572000-m03) DBG | hyperkit pid from json: 2972
	I0717 10:31:29.713653    3437 status.go:330] ha-572000-m03 host status = "Running" (err=<nil>)
	I0717 10:31:29.713664    3437 host.go:66] Checking if "ha-572000-m03" exists ...
	I0717 10:31:29.713912    3437 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:31:29.713954    3437 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:31:29.722506    3437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51620
	I0717 10:31:29.722876    3437 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:31:29.723222    3437 main.go:141] libmachine: Using API Version  1
	I0717 10:31:29.723245    3437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:31:29.723459    3437 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:31:29.723571    3437 main.go:141] libmachine: (ha-572000-m03) Calling .GetIP
	I0717 10:31:29.723677    3437 host.go:66] Checking if "ha-572000-m03" exists ...
	I0717 10:31:29.723945    3437 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:31:29.723968    3437 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:31:29.732395    3437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51622
	I0717 10:31:29.732742    3437 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:31:29.733076    3437 main.go:141] libmachine: Using API Version  1
	I0717 10:31:29.733085    3437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:31:29.733303    3437 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:31:29.733407    3437 main.go:141] libmachine: (ha-572000-m03) Calling .DriverName
	I0717 10:31:29.733545    3437 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 10:31:29.733556    3437 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHHostname
	I0717 10:31:29.733631    3437 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHPort
	I0717 10:31:29.733711    3437 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHKeyPath
	I0717 10:31:29.733792    3437 main.go:141] libmachine: (ha-572000-m03) Calling .GetSSHUsername
	I0717 10:31:29.733869    3437 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m03/id_rsa Username:docker}
	I0717 10:31:29.768415    3437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 10:31:29.780500    3437 kubeconfig.go:125] found "ha-572000" server: "https://192.169.0.254:8443"
	I0717 10:31:29.780515    3437 api_server.go:166] Checking apiserver status ...
	I0717 10:31:29.780559    3437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 10:31:29.792415    3437 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2017/cgroup
	W0717 10:31:29.800355    3437 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2017/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 10:31:29.800402    3437 ssh_runner.go:195] Run: ls
	I0717 10:31:29.803587    3437 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0717 10:31:29.806502    3437 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0717 10:31:29.806513    3437 status.go:422] ha-572000-m03 apiserver status = Running (err=<nil>)
	I0717 10:31:29.806522    3437 status.go:257] ha-572000-m03 status: &{Name:ha-572000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 10:31:29.806532    3437 status.go:255] checking status of ha-572000-m04 ...
	I0717 10:31:29.806791    3437 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:31:29.806813    3437 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:31:29.815519    3437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51626
	I0717 10:31:29.815906    3437 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:31:29.816234    3437 main.go:141] libmachine: Using API Version  1
	I0717 10:31:29.816245    3437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:31:29.816449    3437 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:31:29.816554    3437 main.go:141] libmachine: (ha-572000-m04) Calling .GetState
	I0717 10:31:29.816627    3437 main.go:141] libmachine: (ha-572000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:31:29.816722    3437 main.go:141] libmachine: (ha-572000-m04) DBG | hyperkit pid from json: 3096
	I0717 10:31:29.817724    3437 status.go:330] ha-572000-m04 host status = "Running" (err=<nil>)
	I0717 10:31:29.817733    3437 host.go:66] Checking if "ha-572000-m04" exists ...
	I0717 10:31:29.817973    3437 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:31:29.817995    3437 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:31:29.826513    3437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51628
	I0717 10:31:29.826880    3437 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:31:29.827219    3437 main.go:141] libmachine: Using API Version  1
	I0717 10:31:29.827235    3437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:31:29.827430    3437 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:31:29.827542    3437 main.go:141] libmachine: (ha-572000-m04) Calling .GetIP
	I0717 10:31:29.827629    3437 host.go:66] Checking if "ha-572000-m04" exists ...
	I0717 10:31:29.827907    3437 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:31:29.827936    3437 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:31:29.836327    3437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51630
	I0717 10:31:29.836695    3437 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:31:29.837003    3437 main.go:141] libmachine: Using API Version  1
	I0717 10:31:29.837024    3437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:31:29.837215    3437 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:31:29.837331    3437 main.go:141] libmachine: (ha-572000-m04) Calling .DriverName
	I0717 10:31:29.837454    3437 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 10:31:29.837466    3437 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHHostname
	I0717 10:31:29.837569    3437 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHPort
	I0717 10:31:29.837678    3437 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHKeyPath
	I0717 10:31:29.837765    3437 main.go:141] libmachine: (ha-572000-m04) Calling .GetSSHUsername
	I0717 10:31:29.837859    3437 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m04/id_rsa Username:docker}
	I0717 10:31:29.869187    3437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 10:31:29.880583    3437 status.go:257] ha-572000-m04 status: &{Name:ha-572000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (8.71s)
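
Note that once m02 is stopped, minikube status returns a non-zero exit code (exit status 7 in the log above) rather than 0, which is why the harness records a "Non-zero exit" without failing the test. A Go sketch that stops the node and reads that exit code explicitly:

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	bin, profile := "out/minikube-darwin-amd64", "ha-572000"

	// Stop the second control-plane node.
	if err := exec.Command(bin, "-p", profile, "node", "stop", "m02").Run(); err != nil {
		panic(err)
	}

	// "status" exits non-zero when a node is not fully running, as seen above,
	// so inspect the exit code instead of treating the error as a hard failure.
	cmd := exec.Command(bin, "-p", profile, "status")
	cmd.Stdout = os.Stdout
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("status exit code:", exitErr.ExitCode())
	} else if err != nil {
		panic(err)
	}
}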

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.26s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (39.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 node start m02 -v=7 --alsologtostderr
E0717 10:31:33.859334    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-572000 node start m02 -v=7 --alsologtostderr: (39.070079176s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-572000 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (39.57s)
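
RestartSecondaryNode brings the stopped control plane back with minikube node start and then confirms membership through the API server. A short Go sketch of the same two steps, assuming the ha-572000 kubeconfig context used elsewhere in this log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	bin, profile := "out/minikube-darwin-amd64", "ha-572000"

	// Bring the stopped control-plane node back up.
	if out, err := exec.Command(bin, "-p", profile, "node", "start", "m02").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("node start failed: %v\n%s", err, out))
	}

	// Confirm all nodes are registered with the API server again.
	out, err := exec.Command("kubectl", "--context", profile, "get", "nodes").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}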

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.33s)

                                                
                                    
TestJSONOutput/start/Command (206.64s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-213000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E0717 10:43:50.072821    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
E0717 10:44:48.142598    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 10:45:13.122749    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-213000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (3m26.638666853s)
--- PASS: TestJSONOutput/start/Command (206.64s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.48s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-213000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.45s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-213000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.45s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (8.34s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-213000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-213000 --output=json --user=testUser: (8.343775897s)
--- PASS: TestJSONOutput/stop/Command (8.34s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.57s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-026000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-026000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (358.130865ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ef7bd326-767f-43ba-9185-3561f2584fb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-026000] minikube v1.33.1 on Darwin 14.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6c123060-a607-43a9-b9b0-918fd46fc838","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19283"}}
	{"specversion":"1.0","id":"0b960199-777e-425d-8e23-a9832446d276","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig"}}
	{"specversion":"1.0","id":"ad759c0f-6eca-4f19-9407-abf52fcce7f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"a7a60f7a-9725-4cb1-b57e-b0f32a90a42e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"06316450-9a42-49ba-be45-5ae66a77052c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube"}}
	{"specversion":"1.0","id":"214772ac-34f9-4957-b9a4-a728af64e047","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d5882389-59d7-4959-a0cd-65b6fca479af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-026000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-026000
--- PASS: TestErrorJSONOutput (0.57s)
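
With --output=json, minikube emits one CloudEvents-style JSON object per line on stdout, as the captured stdout block above shows (specversion, id, source, type, and a data payload). A Go sketch that runs the same intentionally failing command and surfaces the io.k8s.sigs.minikube.error event; only fields visible in this report are modeled.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// event models the JSON lines shown above; only the fields used here are declared.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Ask minikube for machine-readable output; the bogus driver makes it fail fast.
	cmd := exec.Command("out/minikube-darwin-amd64", "start", "-p", "json-output-error-026000",
		"--memory=2200", "--output=json", "--wait=true", "--driver=fail")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// Each line on stdout is one JSON event; surface error events as they arrive.
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip any non-JSON noise
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Fprintf(os.Stderr, "minikube error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
	_ = cmd.Wait() // a non-zero exit (status 56 above) is expected for this driver
}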

                                                
                                    
TestMainNoArgs (0.08s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
TestMinikubeProfile (90.51s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-131000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-131000 --driver=hyperkit : (39.18922384s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-133000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-133000 --driver=hyperkit : (40.059832813s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-131000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-133000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-133000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-133000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-133000: (5.238406737s)
helpers_test.go:175: Cleaning up "first-131000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-131000
E0717 10:48:50.079429    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-131000: (5.276240354s)
--- PASS: TestMinikubeProfile (90.51s)
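
TestMinikubeProfile creates two independent clusters and flips the active profile between them, checking profile list -ojson after each switch. A Go sketch of the same sequence, including the cleanup, using the profile names from this run:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// mk runs the minikube binary from this report and stops on the first failure.
func mk(args ...string) {
	cmd := exec.Command("out/minikube-darwin-amd64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

func main() {
	// Create two independent clusters, then flip the active profile between them.
	mk("start", "-p", "first-131000", "--driver=hyperkit")
	mk("start", "-p", "second-133000", "--driver=hyperkit")

	mk("profile", "first-131000") // make first-131000 the active profile
	mk("profile", "list", "-ojson")
	mk("profile", "second-133000") // switch the active profile
	mk("profile", "list", "-ojson")

	// Clean up both profiles when done.
	mk("delete", "-p", "second-133000")
	mk("delete", "-p", "first-131000")
}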

                                                
                                    
TestMountStart/serial/StartWithMountFirst (21.6s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-093000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-093000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (20.596787203s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.60s)
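
The mount-start run above passes the full set of host-mount flags (uid, gid, msize, and port) while skipping Kubernetes entirely. A Go sketch that builds the identical start invocation:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Start a Kubernetes-free VM with a host mount, using the same flag set as
	// the test above: custom uid/gid, msize, and mount port.
	cmd := exec.Command("out/minikube-darwin-amd64", "start",
		"-p", "mount-start-1-093000",
		"--memory=2048",
		"--mount", "--mount-gid", "0", "--mount-uid", "0",
		"--mount-msize", "6543", "--mount-port", "46464",
		"--no-kubernetes", "--driver=hyperkit")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}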

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (109.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-875000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0717 10:49:48.148992    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-875000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m49.309869296s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.55s)
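
FreshStart2Nodes differs from the single-node starts mainly in the --nodes=2 flag, which creates a two-node cluster (control plane plus worker) in one profile. A Go sketch of the start-then-status pair, with the verbosity flags omitted:

package main

import (
	"os"
	"os/exec"
)

// run invokes the minikube binary from this report, streaming its output.
func run(args ...string) error {
	cmd := exec.Command("out/minikube-darwin-amd64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Bring up a two-node cluster, then confirm both machines report Running.
	if err := run("start", "-p", "multinode-875000", "--wait=true",
		"--memory=2200", "--nodes=2", "--driver=hyperkit"); err != nil {
		panic(err)
	}
	if err := run("-p", "multinode-875000", "status"); err != nil {
		panic(err)
	}
}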

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-875000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-875000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-875000 -- rollout status deployment/busybox: (2.478277581s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-875000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-875000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-875000 -- exec busybox-fc5497c4f-kfksv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-875000 -- exec busybox-fc5497c4f-sp4jf -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-875000 -- exec busybox-fc5497c4f-kfksv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-875000 -- exec busybox-fc5497c4f-sp4jf -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-875000 -- exec busybox-fc5497c4f-kfksv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-875000 -- exec busybox-fc5497c4f-sp4jf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.22s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-875000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-875000 -- exec busybox-fc5497c4f-kfksv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-875000 -- exec busybox-fc5497c4f-kfksv -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-875000 -- exec busybox-fc5497c4f-sp4jf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-875000 -- exec busybox-fc5497c4f-sp4jf -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)

                                                
                                    
TestMultiNode/serial/AddNode (47.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-875000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-875000 -v 3 --alsologtostderr: (47.430185261s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.74s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-875000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

TestMultiNode/serial/ProfileList (0.17s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.17s)

TestMultiNode/serial/CopyFile (5.25s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 cp testdata/cp-test.txt multinode-875000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 ssh -n multinode-875000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 cp multinode-875000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1394758527/001/cp-test_multinode-875000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 ssh -n multinode-875000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 cp multinode-875000:/home/docker/cp-test.txt multinode-875000-m02:/home/docker/cp-test_multinode-875000_multinode-875000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 ssh -n multinode-875000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 ssh -n multinode-875000-m02 "sudo cat /home/docker/cp-test_multinode-875000_multinode-875000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 cp multinode-875000:/home/docker/cp-test.txt multinode-875000-m03:/home/docker/cp-test_multinode-875000_multinode-875000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 ssh -n multinode-875000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 ssh -n multinode-875000-m03 "sudo cat /home/docker/cp-test_multinode-875000_multinode-875000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 cp testdata/cp-test.txt multinode-875000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 ssh -n multinode-875000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 cp multinode-875000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1394758527/001/cp-test_multinode-875000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 ssh -n multinode-875000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 cp multinode-875000-m02:/home/docker/cp-test.txt multinode-875000:/home/docker/cp-test_multinode-875000-m02_multinode-875000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 ssh -n multinode-875000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 ssh -n multinode-875000 "sudo cat /home/docker/cp-test_multinode-875000-m02_multinode-875000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 cp multinode-875000-m02:/home/docker/cp-test.txt multinode-875000-m03:/home/docker/cp-test_multinode-875000-m02_multinode-875000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 ssh -n multinode-875000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 ssh -n multinode-875000-m03 "sudo cat /home/docker/cp-test_multinode-875000-m02_multinode-875000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 cp testdata/cp-test.txt multinode-875000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 ssh -n multinode-875000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 cp multinode-875000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1394758527/001/cp-test_multinode-875000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 ssh -n multinode-875000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 cp multinode-875000-m03:/home/docker/cp-test.txt multinode-875000:/home/docker/cp-test_multinode-875000-m03_multinode-875000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 ssh -n multinode-875000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 ssh -n multinode-875000 "sudo cat /home/docker/cp-test_multinode-875000-m03_multinode-875000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 cp multinode-875000-m03:/home/docker/cp-test.txt multinode-875000-m02:/home/docker/cp-test_multinode-875000-m03_multinode-875000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 ssh -n multinode-875000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 ssh -n multinode-875000-m02 "sudo cat /home/docker/cp-test_multinode-875000-m03_multinode-875000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.25s)
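The copy matrix above repeats one pattern: push a file to a node with minikube cp, then read it back over ssh to verify. A minimal sketch of that round trip, reusing the profile and node names from this run (any other profile behaves the same way):

$ out/minikube-darwin-amd64 -p multinode-875000 cp testdata/cp-test.txt multinode-875000-m02:/home/docker/cp-test.txt
$ out/minikube-darwin-amd64 -p multinode-875000 ssh -n multinode-875000-m02 "sudo cat /home/docker/cp-test.txt"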

TestMultiNode/serial/StopNode (2.82s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-875000 node stop m03: (2.333102605s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-875000 status: exit status 7 (245.908319ms)

-- stdout --
	multinode-875000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-875000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-875000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-875000 status --alsologtostderr: exit status 7 (242.507455ms)

-- stdout --
	multinode-875000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-875000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-875000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:52:07.521745    4446 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:52:07.521948    4446 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:52:07.521955    4446 out.go:304] Setting ErrFile to fd 2...
	I0717 10:52:07.521959    4446 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:52:07.522137    4446 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
	I0717 10:52:07.522314    4446 out.go:298] Setting JSON to false
	I0717 10:52:07.522339    4446 mustload.go:65] Loading cluster: multinode-875000
	I0717 10:52:07.522379    4446 notify.go:220] Checking for updates...
	I0717 10:52:07.522680    4446 config.go:182] Loaded profile config "multinode-875000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:52:07.522697    4446 status.go:255] checking status of multinode-875000 ...
	I0717 10:52:07.523112    4446 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:52:07.523150    4446 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:52:07.532157    4446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53076
	I0717 10:52:07.532481    4446 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:52:07.532864    4446 main.go:141] libmachine: Using API Version  1
	I0717 10:52:07.532873    4446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:52:07.533066    4446 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:52:07.533161    4446 main.go:141] libmachine: (multinode-875000) Calling .GetState
	I0717 10:52:07.533240    4446 main.go:141] libmachine: (multinode-875000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:52:07.533308    4446 main.go:141] libmachine: (multinode-875000) DBG | hyperkit pid from json: 4146
	I0717 10:52:07.534480    4446 status.go:330] multinode-875000 host status = "Running" (err=<nil>)
	I0717 10:52:07.534498    4446 host.go:66] Checking if "multinode-875000" exists ...
	I0717 10:52:07.534740    4446 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:52:07.534767    4446 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:52:07.543154    4446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53078
	I0717 10:52:07.543530    4446 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:52:07.543886    4446 main.go:141] libmachine: Using API Version  1
	I0717 10:52:07.543915    4446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:52:07.544122    4446 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:52:07.544238    4446 main.go:141] libmachine: (multinode-875000) Calling .GetIP
	I0717 10:52:07.544321    4446 host.go:66] Checking if "multinode-875000" exists ...
	I0717 10:52:07.544582    4446 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:52:07.544605    4446 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:52:07.553083    4446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53080
	I0717 10:52:07.553412    4446 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:52:07.553727    4446 main.go:141] libmachine: Using API Version  1
	I0717 10:52:07.553743    4446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:52:07.553951    4446 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:52:07.554062    4446 main.go:141] libmachine: (multinode-875000) Calling .DriverName
	I0717 10:52:07.554206    4446 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 10:52:07.554227    4446 main.go:141] libmachine: (multinode-875000) Calling .GetSSHHostname
	I0717 10:52:07.554296    4446 main.go:141] libmachine: (multinode-875000) Calling .GetSSHPort
	I0717 10:52:07.554374    4446 main.go:141] libmachine: (multinode-875000) Calling .GetSSHKeyPath
	I0717 10:52:07.554488    4446 main.go:141] libmachine: (multinode-875000) Calling .GetSSHUsername
	I0717 10:52:07.554571    4446 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000/id_rsa Username:docker}
	I0717 10:52:07.582968    4446 ssh_runner.go:195] Run: systemctl --version
	I0717 10:52:07.587390    4446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 10:52:07.599432    4446 kubeconfig.go:125] found "multinode-875000" server: "https://192.169.0.15:8443"
	I0717 10:52:07.599458    4446 api_server.go:166] Checking apiserver status ...
	I0717 10:52:07.599495    4446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 10:52:07.610402    4446 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup
	W0717 10:52:07.617589    4446 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 10:52:07.617631    4446 ssh_runner.go:195] Run: ls
	I0717 10:52:07.620723    4446 api_server.go:253] Checking apiserver healthz at https://192.169.0.15:8443/healthz ...
	I0717 10:52:07.623777    4446 api_server.go:279] https://192.169.0.15:8443/healthz returned 200:
	ok
	I0717 10:52:07.623788    4446 status.go:422] multinode-875000 apiserver status = Running (err=<nil>)
	I0717 10:52:07.623801    4446 status.go:257] multinode-875000 status: &{Name:multinode-875000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 10:52:07.623814    4446 status.go:255] checking status of multinode-875000-m02 ...
	I0717 10:52:07.624060    4446 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:52:07.624085    4446 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:52:07.632705    4446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53084
	I0717 10:52:07.633042    4446 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:52:07.633412    4446 main.go:141] libmachine: Using API Version  1
	I0717 10:52:07.633428    4446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:52:07.633661    4446 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:52:07.633773    4446 main.go:141] libmachine: (multinode-875000-m02) Calling .GetState
	I0717 10:52:07.633862    4446 main.go:141] libmachine: (multinode-875000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:52:07.633931    4446 main.go:141] libmachine: (multinode-875000-m02) DBG | hyperkit pid from json: 4164
	I0717 10:52:07.635097    4446 status.go:330] multinode-875000-m02 host status = "Running" (err=<nil>)
	I0717 10:52:07.635104    4446 host.go:66] Checking if "multinode-875000-m02" exists ...
	I0717 10:52:07.635359    4446 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:52:07.635382    4446 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:52:07.643795    4446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53086
	I0717 10:52:07.644150    4446 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:52:07.644479    4446 main.go:141] libmachine: Using API Version  1
	I0717 10:52:07.644490    4446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:52:07.644690    4446 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:52:07.644812    4446 main.go:141] libmachine: (multinode-875000-m02) Calling .GetIP
	I0717 10:52:07.644885    4446 host.go:66] Checking if "multinode-875000-m02" exists ...
	I0717 10:52:07.645153    4446 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:52:07.645178    4446 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:52:07.653749    4446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53088
	I0717 10:52:07.654119    4446 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:52:07.654438    4446 main.go:141] libmachine: Using API Version  1
	I0717 10:52:07.654448    4446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:52:07.654644    4446 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:52:07.654756    4446 main.go:141] libmachine: (multinode-875000-m02) Calling .DriverName
	I0717 10:52:07.654874    4446 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 10:52:07.654885    4446 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHHostname
	I0717 10:52:07.654960    4446 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHPort
	I0717 10:52:07.655038    4446 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHKeyPath
	I0717 10:52:07.655119    4446 main.go:141] libmachine: (multinode-875000-m02) Calling .GetSSHUsername
	I0717 10:52:07.655196    4446 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/multinode-875000-m02/id_rsa Username:docker}
	I0717 10:52:07.686079    4446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 10:52:07.696362    4446 status.go:257] multinode-875000-m02 status: &{Name:multinode-875000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0717 10:52:07.696378    4446 status.go:255] checking status of multinode-875000-m03 ...
	I0717 10:52:07.696651    4446 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:52:07.696676    4446 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:52:07.705322    4446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53091
	I0717 10:52:07.705665    4446 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:52:07.706010    4446 main.go:141] libmachine: Using API Version  1
	I0717 10:52:07.706024    4446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:52:07.706226    4446 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:52:07.706323    4446 main.go:141] libmachine: (multinode-875000-m03) Calling .GetState
	I0717 10:52:07.706404    4446 main.go:141] libmachine: (multinode-875000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:52:07.706474    4446 main.go:141] libmachine: (multinode-875000-m03) DBG | hyperkit pid from json: 4235
	I0717 10:52:07.707642    4446 main.go:141] libmachine: (multinode-875000-m03) DBG | hyperkit pid 4235 missing from process table
	I0717 10:52:07.707664    4446 status.go:330] multinode-875000-m03 host status = "Stopped" (err=<nil>)
	I0717 10:52:07.707671    4446 status.go:343] host is not running, skipping remaining checks
	I0717 10:52:07.707678    4446 status.go:257] multinode-875000-m03 status: &{Name:multinode-875000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.82s)
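The exit status 7 above is the expected signal here: with one node stopped, minikube status reports the cluster as only partially available through a non-zero exit code rather than failing the command outright. A sketch of the check this test performs, using the node name from this run:

$ out/minikube-darwin-amd64 -p multinode-875000 node stop m03
$ out/minikube-darwin-amd64 -p multinode-875000 status    # exit status 7 while m03 is stopped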

TestMultiNode/serial/StartAfterStop (41.76s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-875000 node start m03 -v=7 --alsologtostderr: (41.403333096s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.76s)

TestMultiNode/serial/DeleteNode (8.18s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-875000 node delete m03: (7.839974847s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (8.18s)

TestMultiNode/serial/StopMultiNode (16.81s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-875000 stop: (16.642255979s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-875000 status: exit status 7 (85.186107ms)

-- stdout --
	multinode-875000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-875000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-875000 status --alsologtostderr: exit status 7 (78.049498ms)

-- stdout --
	multinode-875000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-875000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:57:02.836639    4687 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:57:02.836923    4687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:57:02.836929    4687 out.go:304] Setting ErrFile to fd 2...
	I0717 10:57:02.836933    4687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:57:02.837116    4687 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
	I0717 10:57:02.837293    4687 out.go:298] Setting JSON to false
	I0717 10:57:02.837317    4687 mustload.go:65] Loading cluster: multinode-875000
	I0717 10:57:02.837357    4687 notify.go:220] Checking for updates...
	I0717 10:57:02.837621    4687 config.go:182] Loaded profile config "multinode-875000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:57:02.837638    4687 status.go:255] checking status of multinode-875000 ...
	I0717 10:57:02.837980    4687 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:57:02.838036    4687 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:57:02.846773    4687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53329
	I0717 10:57:02.847121    4687 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:57:02.847519    4687 main.go:141] libmachine: Using API Version  1
	I0717 10:57:02.847532    4687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:57:02.847741    4687 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:57:02.847853    4687 main.go:141] libmachine: (multinode-875000) Calling .GetState
	I0717 10:57:02.847945    4687 main.go:141] libmachine: (multinode-875000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:57:02.848009    4687 main.go:141] libmachine: (multinode-875000) DBG | hyperkit pid from json: 4506
	I0717 10:57:02.848942    4687 main.go:141] libmachine: (multinode-875000) DBG | hyperkit pid 4506 missing from process table
	I0717 10:57:02.848994    4687 status.go:330] multinode-875000 host status = "Stopped" (err=<nil>)
	I0717 10:57:02.849006    4687 status.go:343] host is not running, skipping remaining checks
	I0717 10:57:02.849013    4687 status.go:257] multinode-875000 status: &{Name:multinode-875000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 10:57:02.849035    4687 status.go:255] checking status of multinode-875000-m02 ...
	I0717 10:57:02.849283    4687 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0717 10:57:02.849312    4687 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0717 10:57:02.857601    4687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53331
	I0717 10:57:02.857954    4687 main.go:141] libmachine: () Calling .GetVersion
	I0717 10:57:02.858331    4687 main.go:141] libmachine: Using API Version  1
	I0717 10:57:02.858349    4687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 10:57:02.858547    4687 main.go:141] libmachine: () Calling .GetMachineName
	I0717 10:57:02.858651    4687 main.go:141] libmachine: (multinode-875000-m02) Calling .GetState
	I0717 10:57:02.858742    4687 main.go:141] libmachine: (multinode-875000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0717 10:57:02.858806    4687 main.go:141] libmachine: (multinode-875000-m02) DBG | hyperkit pid from json: 4537
	I0717 10:57:02.859732    4687 main.go:141] libmachine: (multinode-875000-m02) DBG | hyperkit pid 4537 missing from process table
	I0717 10:57:02.859758    4687 status.go:330] multinode-875000-m02 host status = "Stopped" (err=<nil>)
	I0717 10:57:02.859767    4687 status.go:343] host is not running, skipping remaining checks
	I0717 10:57:02.859774    4687 status.go:257] multinode-875000-m02 status: &{Name:multinode-875000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.81s)

TestMultiNode/serial/RestartMultiNode (123.68s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-875000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0717 10:58:50.107674    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-875000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (2m3.342274107s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-875000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (123.68s)
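The readiness assertion after the restart is a plain go-template over each node's conditions. The same check can be run by hand against the restarted cluster; this is the template from the test invocation above with the shell quoting simplified:

$ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'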

TestMultiNode/serial/ValidateNameConflict (163.16s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-875000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-875000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-875000-m02 --driver=hyperkit : exit status 14 (466.815712ms)

-- stdout --
	* [multinode-875000-m02] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-875000-m02' is duplicated with machine name 'multinode-875000-m02' in profile 'multinode-875000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-875000-m03 --driver=hyperkit 
E0717 10:59:48.175285    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-875000-m03 --driver=hyperkit : (2m34.676055593s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-875000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-875000: exit status 80 (263.593629ms)

-- stdout --
	* Adding node m03 to cluster multinode-875000 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-875000-m03 already exists in multinode-875000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-875000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-875000-m03: (7.696234792s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (163.16s)

TestPreload (140.65s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-822000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-822000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m20.07396889s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-822000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-822000 image pull gcr.io/k8s-minikube/busybox: (1.402268778s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-822000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-822000: (8.404928223s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-822000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
E0717 11:03:50.114457    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-822000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (45.377275815s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-822000 image list
helpers_test.go:175: Cleaning up "test-preload-822000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-822000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-822000: (5.236983426s)
--- PASS: TestPreload (140.65s)

TestScheduledStopUnix (223.79s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-555000 --memory=2048 --driver=hyperkit 
E0717 11:04:48.183348    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-555000 --memory=2048 --driver=hyperkit : (2m32.332749851s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-555000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-555000 -n scheduled-stop-555000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-555000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-555000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-555000 -n scheduled-stop-555000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-555000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-555000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-555000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-555000: exit status 7 (70.688885ms)

-- stdout --
	scheduled-stop-555000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-555000 -n scheduled-stop-555000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-555000 -n scheduled-stop-555000: exit status 7 (66.878487ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-555000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-555000
--- PASS: TestScheduledStopUnix (223.79s)
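For reference, the scheduled-stop behaviour exercised above boils down to a short command sequence; a sketch with the profile name and schedule windows taken from this run:

$ out/minikube-darwin-amd64 stop -p scheduled-stop-555000 --schedule 5m
$ out/minikube-darwin-amd64 stop -p scheduled-stop-555000 --cancel-scheduled
$ out/minikube-darwin-amd64 stop -p scheduled-stop-555000 --schedule 15s
$ out/minikube-darwin-amd64 status -p scheduled-stop-555000    # exit status 7 once the VM has stopped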

TestSkaffold (114.04s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3361056449 version
skaffold_test.go:59: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3361056449 version: (1.711775094s)
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-449000 --memory=2600 --driver=hyperkit 
E0717 11:08:50.150557    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-449000 --memory=2600 --driver=hyperkit : (39.481891811s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3361056449 run --minikube-profile skaffold-449000 --kube-context skaffold-449000 --status-check=true --port-forward=false --interactive=false
E0717 11:09:31.267167    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3361056449 run --minikube-profile skaffold-449000 --kube-context skaffold-449000 --status-check=true --port-forward=false --interactive=false: (55.256855172s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5dc765859f-dw72d" [b38378dd-2353-4da1-bb93-fc3265fc2812] Running
E0717 11:09:48.219169    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004018962s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-67679f9cfd-wwnqj" [bcce8523-f5bc-4eb7-acb6-a720ca01d3c2] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004246503s
helpers_test.go:175: Cleaning up "skaffold-449000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-449000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-449000: (5.237274351s)
--- PASS: TestSkaffold (114.04s)

TestRunningBinaryUpgrade (86.93s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1850386288 start -p running-upgrade-031000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:120: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1850386288 start -p running-upgrade-031000 --memory=2200 --vm-driver=hyperkit : (46.975822893s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-031000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0717 11:14:45.822628    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0717 11:14:45.828475    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0717 11:14:45.839647    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0717 11:14:45.860102    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0717 11:14:45.900337    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0717 11:14:45.981757    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0717 11:14:46.143211    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0717 11:14:46.464761    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0717 11:14:47.105441    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0717 11:14:48.228437    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 11:14:48.387282    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0717 11:14:50.948739    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0717 11:14:56.069889    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0717 11:15:06.312318    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-031000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (33.523732057s)
helpers_test.go:175: Cleaning up "running-upgrade-031000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-031000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-031000: (5.320788765s)
--- PASS: TestRunningBinaryUpgrade (86.93s)

TestKubernetesUpgrade (124.57s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-386000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
E0717 11:15:26.793499    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0717 11:16:07.754981    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-386000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (52.903206464s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-386000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-386000: (8.401093498s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-386000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-386000 status --format={{.Host}}: exit status 7 (67.926095ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-386000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-386000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit : (34.525679798s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-386000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-386000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-386000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (565.311334ms)

-- stdout --
	* [kubernetes-upgrade-386000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-386000
	    minikube start -p kubernetes-upgrade-386000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3860002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-386000 --kubernetes-version=v1.31.0-beta.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-386000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-386000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit : (24.599569804s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-386000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-386000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-386000: (3.45855536s)
--- PASS: TestKubernetesUpgrade (124.57s)
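The exit status 106 above is the intended guard against in-place downgrades. The recovery path is the one the stderr itself prints, i.e. recreate the profile at the older version; a sketch with the names from this run (the --driver flag mirrors the test's own start command):

$ out/minikube-darwin-amd64 delete -p kubernetes-upgrade-386000
$ out/minikube-darwin-amd64 start -p kubernetes-upgrade-386000 --kubernetes-version=v1.20.0 --driver=hyperkit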

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.46s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19283
- KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current936073688/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current936073688/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current936073688/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current936073688/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.46s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.96s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19283
- KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3088357473/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3088357473/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3088357473/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3088357473/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.96s)

TestStoppedBinaryUpgrade/Setup (1.13s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.13s)

TestStoppedBinaryUpgrade/Upgrade (88.94s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.709152341 start -p stopped-upgrade-488000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:183: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.709152341 start -p stopped-upgrade-488000 --memory=2200 --vm-driver=hyperkit : (48.430243155s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.709152341 -p stopped-upgrade-488000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.709152341 -p stopped-upgrade-488000 stop: (3.229736747s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-488000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-488000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (37.275020946s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (88.94s)

TestPause/serial/Start (58.31s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-956000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
E0717 11:17:29.678999    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-956000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (58.310997237s)
--- PASS: TestPause/serial/Start (58.31s)

TestPause/serial/SecondStartNoReconfiguration (41.11s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-956000 --alsologtostderr -v=1 --driver=hyperkit 
E0717 11:18:33.217153    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
E0717 11:18:50.166586    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-956000 --alsologtostderr -v=1 --driver=hyperkit : (41.091497306s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (41.11s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.08s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-488000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-488000: (3.075786219s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.08s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.5s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-651000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-651000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (498.225585ms)
-- stdout --
	* [NoKubernetes-651000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.50s)
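Exit status 14 above is the usage error printed in the stderr block: --kubernetes-version cannot be combined with --no-kubernetes. A minimal sketch of the recovery the message suggests, reusing this test's profile name and driver flag, would be:

$ out/minikube-darwin-amd64 config unset kubernetes-version
$ out/minikube-darwin-amd64 start -p NoKubernetes-651000 --no-kubernetes --driver=hyperkit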

TestNoKubernetes/serial/StartWithK8s (39.81s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-651000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-651000 --driver=hyperkit : (39.647124307s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-651000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.81s)

TestPause/serial/Pause (0.57s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-956000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.57s)

TestPause/serial/VerifyStatus (0.16s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-956000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-956000 --output=json --layout=cluster: exit status 2 (161.376144ms)
-- stdout --
	{"Name":"pause-956000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-956000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.16s)
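The non-zero exit here is tolerated by the test: status returns exit code 2 while the cluster is paused, and the JSON above carries the detail (apiserver Paused, kubelet Stopped). Assuming the jq tool is available, the per-component states can be pulled out of that output with, for example:

$ out/minikube-darwin-amd64 status -p pause-956000 --output=json --layout=cluster | jq '.Nodes[0].Components'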

TestPause/serial/Unpause (0.53s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-956000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.53s)

TestPause/serial/PauseAgain (0.6s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-956000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.60s)

TestPause/serial/DeletePaused (5.24s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-956000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-956000 --alsologtostderr -v=5: (5.238967292s)
--- PASS: TestPause/serial/DeletePaused (5.24s)

TestPause/serial/VerifyDeletedResources (0.17s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.17s)

TestNetworkPlugins/group/auto/Start (93.39s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-718000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-718000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (1m33.387602948s)
--- PASS: TestNetworkPlugins/group/auto/Start (93.39s)

TestNoKubernetes/serial/StartWithStopK8s (17.57s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-651000 --no-kubernetes --driver=hyperkit 
E0717 11:19:45.831478    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0717 11:19:48.237055    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-651000 --no-kubernetes --driver=hyperkit : (15.035987291s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-651000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-651000 status -o json: exit status 2 (144.93922ms)
-- stdout --
	{"Name":"NoKubernetes-651000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-651000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-651000: (2.387080765s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.57s)

TestNoKubernetes/serial/Start (20.82s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-651000 --no-kubernetes --driver=hyperkit 
E0717 11:20:13.525784    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-651000 --no-kubernetes --driver=hyperkit : (20.821744038s)
--- PASS: TestNoKubernetes/serial/Start (20.82s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-651000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-651000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (128.085976ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)

TestNoKubernetes/serial/ProfileList (0.5s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.50s)

TestNoKubernetes/serial/Stop (2.45s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-651000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-651000: (2.451235s)
--- PASS: TestNoKubernetes/serial/Stop (2.45s)

TestNoKubernetes/serial/StartNoArgs (19.37s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-651000 --driver=hyperkit 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-651000 --driver=hyperkit : (19.373622083s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (19.37s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-651000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-651000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (127.149899ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

TestNetworkPlugins/group/calico/Start (196.25s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-718000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-718000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (3m16.251274992s)
--- PASS: TestNetworkPlugins/group/calico/Start (196.25s)

TestNetworkPlugins/group/auto/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-718000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.15s)

TestNetworkPlugins/group/auto/NetCatPod (12.15s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-718000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-9d6dv" [80a5743a-2781-49f8-8e8f-427c52c0a8d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-9d6dv" [80a5743a-2781-49f8-8e8f-427c52c0a8d6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.005197663s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.15s)

TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-718000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-718000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-718000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)

TestNetworkPlugins/group/custom-flannel/Start (450.47s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-718000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
E0717 11:23:50.158938    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-718000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (7m30.467964991s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (450.47s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-jpxbf" [977cfd1d-9fdd-48e1-92a7-60e34d93ed55] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00648445s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-718000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.16s)

TestNetworkPlugins/group/calico/NetCatPod (11.13s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-718000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-29zdn" [8991dc3f-02cb-4818-9d46-4012e22b61b6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-29zdn" [8991dc3f-02cb-4818-9d46-4012e22b61b6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004248001s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.13s)

TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-718000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-718000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-718000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

TestNetworkPlugins/group/false/Start (270.81s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-718000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
E0717 11:24:45.822693    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0717 11:24:48.226728    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 11:25:49.101568    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/auto-718000/client.crt: no such file or directory
E0717 11:25:49.107323    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/auto-718000/client.crt: no such file or directory
E0717 11:25:49.118230    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/auto-718000/client.crt: no such file or directory
E0717 11:25:49.140464    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/auto-718000/client.crt: no such file or directory
E0717 11:25:49.181513    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/auto-718000/client.crt: no such file or directory
E0717 11:25:49.262376    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/auto-718000/client.crt: no such file or directory
E0717 11:25:49.424558    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/auto-718000/client.crt: no such file or directory
E0717 11:25:49.746807    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/auto-718000/client.crt: no such file or directory
E0717 11:25:50.389189    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/auto-718000/client.crt: no such file or directory
E0717 11:25:51.671217    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/auto-718000/client.crt: no such file or directory
E0717 11:25:54.233522    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/auto-718000/client.crt: no such file or directory
E0717 11:25:59.354561    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/auto-718000/client.crt: no such file or directory
E0717 11:26:09.597134    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/auto-718000/client.crt: no such file or directory
E0717 11:26:11.279985    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 11:26:30.079556    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/auto-718000/client.crt: no such file or directory
E0717 11:27:11.041018    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/auto-718000/client.crt: no such file or directory
E0717 11:28:32.962987    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/auto-718000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-718000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : (4m30.807459361s)
--- PASS: TestNetworkPlugins/group/false/Start (270.81s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-718000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.15s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-718000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-g4fwc" [fc61e93b-61da-4945-9c0e-0838fe161d57] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 11:28:50.165447    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-g4fwc" [fc61e93b-61da-4945-9c0e-0838fe161d57] Running
E0717 11:28:58.629679    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/calico-718000/client.crt: no such file or directory
E0717 11:28:58.636079    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/calico-718000/client.crt: no such file or directory
E0717 11:28:58.648265    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/calico-718000/client.crt: no such file or directory
E0717 11:28:58.669408    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/calico-718000/client.crt: no such file or directory
E0717 11:28:58.710989    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/calico-718000/client.crt: no such file or directory
E0717 11:28:58.791123    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/calico-718000/client.crt: no such file or directory
E0717 11:28:58.952501    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/calico-718000/client.crt: no such file or directory
E0717 11:28:59.274253    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/calico-718000/client.crt: no such file or directory
E0717 11:28:59.914527    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/calico-718000/client.crt: no such file or directory
E0717 11:29:01.194919    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/calico-718000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.003474051s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.13s)

TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-718000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-718000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-718000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/false/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-718000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.16s)

TestNetworkPlugins/group/false/NetCatPod (12.13s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-718000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bjnpp" [158fb1ec-c19a-47cc-82c1-37f2c8bdd731] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 11:29:08.878107    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/calico-718000/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-bjnpp" [158fb1ec-c19a-47cc-82c1-37f2c8bdd731] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.004707088s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.13s)

TestNetworkPlugins/group/false/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-718000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

TestNetworkPlugins/group/false/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-718000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.10s)

TestNetworkPlugins/group/false/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-718000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.10s)

TestNetworkPlugins/group/kindnet/Start (73.03s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-718000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-718000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (1m13.026038538s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (73.03s)

TestNetworkPlugins/group/flannel/Start (182.82s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-718000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
E0717 11:29:39.601264    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/calico-718000/client.crt: no such file or directory
E0717 11:29:45.827069    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0717 11:29:48.232864    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 11:30:20.562977    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/calico-718000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-718000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (3m2.822923548s)
--- PASS: TestNetworkPlugins/group/flannel/Start (182.82s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7vkgz" [24e4b863-8a8f-4e09-9561-9bfdc8f08ff9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004113145s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-718000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.15s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.13s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-718000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7pp8p" [3af21ba5-a296-4554-96b6-651d15a0ffb7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-7pp8p" [3af21ba5-a296-4554-96b6-651d15a0ffb7] Running
E0717 11:30:49.109027    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/auto-718000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.002425642s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.13s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-718000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-718000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-718000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

TestNetworkPlugins/group/enable-default-cni/Start (52.4s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-718000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
E0717 11:31:08.884676    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0717 11:31:16.806998    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/auto-718000/client.crt: no such file or directory
E0717 11:31:42.484948    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/calico-718000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-718000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (52.402047277s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (52.40s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-718000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.16s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-718000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-smqgg" [06826315-9069-4258-885b-46618549e44d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-smqgg" [06826315-9069-4258-885b-46618549e44d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.008241432s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.13s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-718000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-718000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-718000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

TestNetworkPlugins/group/bridge/Start (170.01s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-718000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-718000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (2m50.007119306s)
--- PASS: TestNetworkPlugins/group/bridge/Start (170.01s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-5qg2c" [cd76d086-e7e3-4a85-bd47-d5d3c024076a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004779044s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-718000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.15s)

TestNetworkPlugins/group/flannel/NetCatPod (10.14s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-718000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-45kbt" [833cb1ac-12c4-4f97-9996-eda2294bc7f6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-45kbt" [833cb1ac-12c4-4f97-9996-eda2294bc7f6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004837154s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.14s)

TestNetworkPlugins/group/flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-718000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-718000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-718000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

TestNetworkPlugins/group/kubenet/Start (52.31s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-718000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
E0717 11:33:49.090751    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/custom-flannel-718000/client.crt: no such file or directory
E0717 11:33:49.097183    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/custom-flannel-718000/client.crt: no such file or directory
E0717 11:33:49.108587    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/custom-flannel-718000/client.crt: no such file or directory
E0717 11:33:49.130523    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/custom-flannel-718000/client.crt: no such file or directory
E0717 11:33:49.172429    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/custom-flannel-718000/client.crt: no such file or directory
E0717 11:33:49.254600    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/custom-flannel-718000/client.crt: no such file or directory
E0717 11:33:49.416809    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/custom-flannel-718000/client.crt: no such file or directory
E0717 11:33:49.737299    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/custom-flannel-718000/client.crt: no such file or directory
E0717 11:33:50.170217    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
E0717 11:33:50.379091    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/custom-flannel-718000/client.crt: no such file or directory
E0717 11:33:51.659806    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/custom-flannel-718000/client.crt: no such file or directory
E0717 11:33:54.220019    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/custom-flannel-718000/client.crt: no such file or directory
E0717 11:33:58.634401    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/calico-718000/client.crt: no such file or directory
E0717 11:33:59.341623    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/custom-flannel-718000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-718000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (52.313677915s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (52.31s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-718000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.15s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.13s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-718000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-rp9lf" [01ccb52c-627a-49a9-8869-56e974b804ec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 11:34:04.727342    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/false-718000/client.crt: no such file or directory
E0717 11:34:04.733228    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/false-718000/client.crt: no such file or directory
E0717 11:34:04.745352    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/false-718000/client.crt: no such file or directory
E0717 11:34:04.766098    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/false-718000/client.crt: no such file or directory
E0717 11:34:04.806880    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/false-718000/client.crt: no such file or directory
E0717 11:34:04.887742    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/false-718000/client.crt: no such file or directory
E0717 11:34:05.048589    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/false-718000/client.crt: no such file or directory
E0717 11:34:05.370386    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/false-718000/client.crt: no such file or directory
E0717 11:34:06.010871    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/false-718000/client.crt: no such file or directory
E0717 11:34:07.291050    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/false-718000/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-rp9lf" [01ccb52c-627a-49a9-8869-56e974b804ec] Running
E0717 11:34:09.583688    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/custom-flannel-718000/client.crt: no such file or directory
E0717 11:34:09.852395    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/false-718000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.004793969s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.13s)

TestNetworkPlugins/group/kubenet/DNS (33.58s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-718000 exec deployment/netcat -- nslookup kubernetes.default
E0717 11:34:14.974205    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/false-718000/client.crt: no such file or directory
E0717 11:34:25.216024    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/false-718000/client.crt: no such file or directory
E0717 11:34:26.330227    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/calico-718000/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context kubenet-718000 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.114007603s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0717 11:34:30.064399    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/custom-flannel-718000/client.crt: no such file or directory
net_test.go:175: (dbg) Run:  kubectl --context kubenet-718000 exec deployment/netcat -- nslookup kubernetes.default
E0717 11:34:45.697915    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/false-718000/client.crt: no such file or directory
E0717 11:34:45.833718    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context kubenet-718000 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.111181987s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context kubenet-718000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (33.58s)
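Note: the two nslookup attempts above timed out before the third attempt succeeded, so the DNS check still passed within its 15m window. To spot-check service DNS against this cluster by hand, the same lookup the test issues can be rerun directly; the second command is an extra, hypothetical check (not part of the test) that lists the CoreDNS pods, assuming the kubenet-718000 profile and the netcat deployment are still present:
	kubectl --context kubenet-718000 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context kubenet-718000 -n kube-system get pods -l k8s-app=kube-dns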

TestNetworkPlugins/group/kubenet/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-718000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0717 11:34:48.238926    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

TestNetworkPlugins/group/kubenet/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-718000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)

TestStartStop/group/old-k8s-version/serial/FirstStart (146.59s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-831000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
E0717 11:35:11.026751    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/custom-flannel-718000/client.crt: no such file or directory
E0717 11:35:13.224397    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-831000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (2m26.589718726s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (146.59s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-718000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)

TestNetworkPlugins/group/bridge/NetCatPod (11.13s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-718000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-rspkk" [00d53192-604e-47ba-9818-8db8eab1d706] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-rspkk" [00d53192-604e-47ba-9818-8db8eab1d706] Running
E0717 11:35:26.660219    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/false-718000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.002838825s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.13s)

TestNetworkPlugins/group/bridge/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-718000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-718000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-718000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (91.49s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-820000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
E0717 11:35:49.114799    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/auto-718000/client.crt: no such file or directory
E0717 11:35:53.493529    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kindnet-718000/client.crt: no such file or directory
E0717 11:36:13.975695    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kindnet-718000/client.crt: no such file or directory
E0717 11:36:32.949809    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/custom-flannel-718000/client.crt: no such file or directory
E0717 11:36:48.582307    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/false-718000/client.crt: no such file or directory
E0717 11:36:54.937391    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kindnet-718000/client.crt: no such file or directory
E0717 11:37:00.478104    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/enable-default-cni-718000/client.crt: no such file or directory
E0717 11:37:00.483266    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/enable-default-cni-718000/client.crt: no such file or directory
E0717 11:37:00.493884    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/enable-default-cni-718000/client.crt: no such file or directory
E0717 11:37:00.514456    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/enable-default-cni-718000/client.crt: no such file or directory
E0717 11:37:00.554798    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/enable-default-cni-718000/client.crt: no such file or directory
E0717 11:37:00.636056    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/enable-default-cni-718000/client.crt: no such file or directory
E0717 11:37:00.796783    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/enable-default-cni-718000/client.crt: no such file or directory
E0717 11:37:01.117791    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/enable-default-cni-718000/client.crt: no such file or directory
E0717 11:37:01.758139    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/enable-default-cni-718000/client.crt: no such file or directory
E0717 11:37:03.039892    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/enable-default-cni-718000/client.crt: no such file or directory
E0717 11:37:05.601300    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/enable-default-cni-718000/client.crt: no such file or directory
E0717 11:37:10.721938    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/enable-default-cni-718000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-820000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (1m31.489824798s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (91.49s)

TestStartStop/group/no-preload/serial/DeployApp (8.2s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-820000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [272b8e71-005f-4718-a2ac-6e5ff2a3d201] Pending
helpers_test.go:344: "busybox" [272b8e71-005f-4718-a2ac-6e5ff2a3d201] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0717 11:37:20.963336    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/enable-default-cni-718000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [272b8e71-005f-4718-a2ac-6e5ff2a3d201] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004231298s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-820000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.20s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.75s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-820000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-820000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/no-preload/serial/Stop (8.45s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-820000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-820000 --alsologtostderr -v=3: (8.449653984s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.45s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.28s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-831000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ab38d781-6fbf-4667-af21-b421f7ff0ee4] Pending
helpers_test.go:344: "busybox" [ab38d781-6fbf-4667-af21-b421f7ff0ee4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ab38d781-6fbf-4667-af21-b421f7ff0ee4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.002901098s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-831000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.28s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000: exit status 7 (69.571777ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-820000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/no-preload/serial/SecondStart (289.55s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-820000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
E0717 11:37:36.945388    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/flannel-718000/client.crt: no such file or directory
E0717 11:37:36.951599    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/flannel-718000/client.crt: no such file or directory
E0717 11:37:36.962141    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/flannel-718000/client.crt: no such file or directory
E0717 11:37:36.983213    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/flannel-718000/client.crt: no such file or directory
E0717 11:37:37.025428    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/flannel-718000/client.crt: no such file or directory
E0717 11:37:37.106959    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/flannel-718000/client.crt: no such file or directory
E0717 11:37:37.268406    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/flannel-718000/client.crt: no such file or directory
E0717 11:37:37.589156    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/flannel-718000/client.crt: no such file or directory
E0717 11:37:38.230741    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/flannel-718000/client.crt: no such file or directory
E0717 11:37:39.511064    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/flannel-718000/client.crt: no such file or directory
E0717 11:37:41.444924    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/enable-default-cni-718000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-820000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (4m49.381113448s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (289.55s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.7s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-831000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0717 11:37:42.071554    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/flannel-718000/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-831000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.70s)

TestStartStop/group/old-k8s-version/serial/Stop (8.4s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-831000 --alsologtostderr -v=3
E0717 11:37:47.192149    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/flannel-718000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-831000 --alsologtostderr -v=3: (8.396333674s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.40s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-831000 -n old-k8s-version-831000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-831000 -n old-k8s-version-831000: exit status 7 (67.623868ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-831000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/old-k8s-version/serial/SecondStart (380.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-831000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
E0717 11:37:57.432806    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/flannel-718000/client.crt: no such file or directory
E0717 11:38:16.859446    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kindnet-718000/client.crt: no such file or directory
E0717 11:38:17.914013    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/flannel-718000/client.crt: no such file or directory
E0717 11:38:22.407368    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/enable-default-cni-718000/client.crt: no such file or directory
E0717 11:38:49.097274    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/custom-flannel-718000/client.crt: no such file or directory
E0717 11:38:50.177235    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
E0717 11:38:58.641482    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/calico-718000/client.crt: no such file or directory
E0717 11:38:58.875010    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/flannel-718000/client.crt: no such file or directory
E0717 11:39:03.592840    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kubenet-718000/client.crt: no such file or directory
E0717 11:39:03.598128    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kubenet-718000/client.crt: no such file or directory
E0717 11:39:03.608509    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kubenet-718000/client.crt: no such file or directory
E0717 11:39:03.630645    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kubenet-718000/client.crt: no such file or directory
E0717 11:39:03.671704    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kubenet-718000/client.crt: no such file or directory
E0717 11:39:03.752064    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kubenet-718000/client.crt: no such file or directory
E0717 11:39:03.912953    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kubenet-718000/client.crt: no such file or directory
E0717 11:39:04.233420    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kubenet-718000/client.crt: no such file or directory
E0717 11:39:04.732522    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/false-718000/client.crt: no such file or directory
E0717 11:39:04.874695    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kubenet-718000/client.crt: no such file or directory
E0717 11:39:06.155699    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kubenet-718000/client.crt: no such file or directory
E0717 11:39:08.716091    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kubenet-718000/client.crt: no such file or directory
E0717 11:39:13.836341    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kubenet-718000/client.crt: no such file or directory
E0717 11:39:16.793087    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/custom-flannel-718000/client.crt: no such file or directory
E0717 11:39:24.077524    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kubenet-718000/client.crt: no such file or directory
E0717 11:39:32.426253    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/false-718000/client.crt: no such file or directory
E0717 11:39:44.328191    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/enable-default-cni-718000/client.crt: no such file or directory
E0717 11:39:44.558924    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kubenet-718000/client.crt: no such file or directory
E0717 11:39:45.839166    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0717 11:39:48.245504    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 11:40:19.119335    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/bridge-718000/client.crt: no such file or directory
E0717 11:40:19.124652    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/bridge-718000/client.crt: no such file or directory
E0717 11:40:19.136126    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/bridge-718000/client.crt: no such file or directory
E0717 11:40:19.156358    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/bridge-718000/client.crt: no such file or directory
E0717 11:40:19.197074    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/bridge-718000/client.crt: no such file or directory
E0717 11:40:19.278938    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/bridge-718000/client.crt: no such file or directory
E0717 11:40:19.440031    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/bridge-718000/client.crt: no such file or directory
E0717 11:40:19.760206    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/bridge-718000/client.crt: no such file or directory
E0717 11:40:20.401964    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/bridge-718000/client.crt: no such file or directory
E0717 11:40:20.796767    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/flannel-718000/client.crt: no such file or directory
E0717 11:40:21.683169    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/bridge-718000/client.crt: no such file or directory
E0717 11:40:24.243437    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/bridge-718000/client.crt: no such file or directory
E0717 11:40:25.520217    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kubenet-718000/client.crt: no such file or directory
E0717 11:40:29.363783    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/bridge-718000/client.crt: no such file or directory
E0717 11:40:33.010960    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kindnet-718000/client.crt: no such file or directory
E0717 11:40:39.604271    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/bridge-718000/client.crt: no such file or directory
E0717 11:40:49.119046    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/auto-718000/client.crt: no such file or directory
E0717 11:41:00.085111    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/bridge-718000/client.crt: no such file or directory
E0717 11:41:00.701684    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kindnet-718000/client.crt: no such file or directory
E0717 11:41:41.046374    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/bridge-718000/client.crt: no such file or directory
E0717 11:41:47.443349    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kubenet-718000/client.crt: no such file or directory
E0717 11:42:00.483060    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/enable-default-cni-718000/client.crt: no such file or directory
E0717 11:42:12.180315    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/auto-718000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-831000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (6m19.937791202s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-831000 -n old-k8s-version-831000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (380.11s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-zmkrm" [571cf947-c1f5-4c6d-b497-85b758aa6f43] Running
E0717 11:42:28.173419    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/enable-default-cni-718000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006270391s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-zmkrm" [571cf947-c1f5-4c6d-b497-85b758aa6f43] Running
E0717 11:42:36.950307    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/flannel-718000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004587972s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-820000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.16s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-820000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/no-preload/serial/Pause (1.97s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-820000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-820000 -n no-preload-820000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-820000 -n no-preload-820000: exit status 2 (169.877404ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-820000 -n no-preload-820000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-820000 -n no-preload-820000: exit status 2 (169.835303ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-820000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-820000 -n no-preload-820000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-820000 -n no-preload-820000
--- PASS: TestStartStop/group/no-preload/serial/Pause (1.97s)
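Note: the exit status 2 results above are expected while the profile is paused; the test itself logs "status error: exit status 2 (may be ok)" and only fails if pause/unpause or the final status probes error out. A rough manual replay of the same sequence, assuming the no-preload-820000 profile still exists (substitute a locally installed minikube binary for the CI-built out/ path):
	minikube pause -p no-preload-820000
	minikube status --format={{.APIServer}} -p no-preload-820000    # expect "Paused" and a non-zero exit
	minikube unpause -p no-preload-820000
	minikube status -p no-preload-820000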

TestStartStop/group/embed-certs/serial/FirstStart (52.39s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-489000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.2
E0717 11:42:51.300141    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 11:43:02.968839    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/bridge-718000/client.crt: no such file or directory
E0717 11:43:04.641637    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/flannel-718000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-489000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.2: (52.387454732s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (52.39s)

TestStartStop/group/embed-certs/serial/DeployApp (9.2s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-489000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [297688c9-0e98-40a3-87a9-3ddfd4acc9a3] Pending
helpers_test.go:344: "busybox" [297688c9-0e98-40a3-87a9-3ddfd4acc9a3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [297688c9-0e98-40a3-87a9-3ddfd4acc9a3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004708365s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-489000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.20s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.74s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-489000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-489000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.74s)

TestStartStop/group/embed-certs/serial/Stop (8.43s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-489000 --alsologtostderr -v=3
E0717 11:43:49.102699    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/custom-flannel-718000/client.crt: no such file or directory
E0717 11:43:50.182675    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-489000 --alsologtostderr -v=3: (8.434610258s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.43s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000: exit status 7 (66.248591ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-489000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/embed-certs/serial/SecondStart (309.26s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-489000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.2
E0717 11:43:58.645766    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/calico-718000/client.crt: no such file or directory
E0717 11:44:03.598759    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kubenet-718000/client.crt: no such file or directory
E0717 11:44:04.737446    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/false-718000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-489000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.2: (5m9.041508583s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (309.26s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-jsgfx" [2c35013e-89ed-4f39-9255-3a538194f18a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002196516s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-jsgfx" [2c35013e-89ed-4f39-9255-3a538194f18a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005410627s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-831000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.16s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-831000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/old-k8s-version/serial/Pause (1.89s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-831000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-831000 -n old-k8s-version-831000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-831000 -n old-k8s-version-831000: exit status 2 (159.343681ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-831000 -n old-k8s-version-831000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-831000 -n old-k8s-version-831000: exit status 2 (154.407228ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-831000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-831000 -n old-k8s-version-831000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-831000 -n old-k8s-version-831000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (1.89s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.86s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-277000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.2
E0717 11:44:31.288953    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kubenet-718000/client.crt: no such file or directory
E0717 11:44:45.844926    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0717 11:44:48.249758    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 11:45:19.125834    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/bridge-718000/client.crt: no such file or directory
E0717 11:45:21.703209    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/calico-718000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-277000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.2: (51.862733907s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.86s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.21s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-277000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a64e30a6-7caf-4c1f-b3ee-f92ab6ef94b0] Pending
helpers_test.go:344: "busybox" [a64e30a6-7caf-4c1f-b3ee-f92ab6ef94b0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a64e30a6-7caf-4c1f-b3ee-f92ab6ef94b0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.003550406s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-277000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.21s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.78s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-277000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-277000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (8.43s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-277000 --alsologtostderr -v=3
E0717 11:45:33.018954    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kindnet-718000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-277000 --alsologtostderr -v=3: (8.432021045s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.43s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-277000 -n default-k8s-diff-port-277000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-277000 -n default-k8s-diff-port-277000: exit status 7 (66.948236ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-277000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (308.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-277000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.2
E0717 11:45:46.813550    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/bridge-718000/client.crt: no such file or directory
E0717 11:45:49.125976    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/auto-718000/client.crt: no such file or directory
E0717 11:47:00.488685    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/enable-default-cni-718000/client.crt: no such file or directory
E0717 11:47:19.228577    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/no-preload-820000/client.crt: no such file or directory
E0717 11:47:19.234926    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/no-preload-820000/client.crt: no such file or directory
E0717 11:47:19.246550    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/no-preload-820000/client.crt: no such file or directory
E0717 11:47:19.266953    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/no-preload-820000/client.crt: no such file or directory
E0717 11:47:19.307967    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/no-preload-820000/client.crt: no such file or directory
E0717 11:47:19.389346    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/no-preload-820000/client.crt: no such file or directory
E0717 11:47:19.549539    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/no-preload-820000/client.crt: no such file or directory
E0717 11:47:19.869880    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/no-preload-820000/client.crt: no such file or directory
E0717 11:47:20.510391    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/no-preload-820000/client.crt: no such file or directory
E0717 11:47:21.791334    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/no-preload-820000/client.crt: no such file or directory
E0717 11:47:24.352652    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/no-preload-820000/client.crt: no such file or directory
E0717 11:47:29.474447    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/no-preload-820000/client.crt: no such file or directory
E0717 11:47:32.466493    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/old-k8s-version-831000/client.crt: no such file or directory
E0717 11:47:32.472353    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/old-k8s-version-831000/client.crt: no such file or directory
E0717 11:47:32.484540    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/old-k8s-version-831000/client.crt: no such file or directory
E0717 11:47:32.505438    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/old-k8s-version-831000/client.crt: no such file or directory
E0717 11:47:32.546671    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/old-k8s-version-831000/client.crt: no such file or directory
E0717 11:47:32.628868    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/old-k8s-version-831000/client.crt: no such file or directory
E0717 11:47:32.790530    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/old-k8s-version-831000/client.crt: no such file or directory
E0717 11:47:33.112180    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/old-k8s-version-831000/client.crt: no such file or directory
E0717 11:47:33.753346    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/old-k8s-version-831000/client.crt: no such file or directory
E0717 11:47:35.034066    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/old-k8s-version-831000/client.crt: no such file or directory
E0717 11:47:36.956518    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/flannel-718000/client.crt: no such file or directory
E0717 11:47:37.595343    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/old-k8s-version-831000/client.crt: no such file or directory
E0717 11:47:39.716423    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/no-preload-820000/client.crt: no such file or directory
E0717 11:47:42.717273    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/old-k8s-version-831000/client.crt: no such file or directory
E0717 11:47:48.904958    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0717 11:47:52.957872    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/old-k8s-version-831000/client.crt: no such file or directory
E0717 11:48:00.198671    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/no-preload-820000/client.crt: no such file or directory
E0717 11:48:13.439339    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/old-k8s-version-831000/client.crt: no such file or directory
E0717 11:48:41.159680    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/no-preload-820000/client.crt: no such file or directory
E0717 11:48:49.108115    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/custom-flannel-718000/client.crt: no such file or directory
E0717 11:48:50.187498    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
E0717 11:48:54.400758    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/old-k8s-version-831000/client.crt: no such file or directory
E0717 11:48:58.653569    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/calico-718000/client.crt: no such file or directory
E0717 11:49:03.605268    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kubenet-718000/client.crt: no such file or directory
E0717 11:49:04.745011    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/false-718000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-277000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.2: (5m8.043241026s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-277000 -n default-k8s-diff-port-277000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (308.24s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-kvcx9" [094174d3-5eba-401e-95a8-eb4e26555ddf] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-779776cb65-kvcx9" [094174d3-5eba-401e-95a8-eb4e26555ddf] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.005200623s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-kvcx9" [094174d3-5eba-401e-95a8-eb4e26555ddf] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003533754s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-489000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-489000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/embed-certs/serial/Pause (1.99s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-489000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-489000 -n embed-certs-489000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-489000 -n embed-certs-489000: exit status 2 (158.335518ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-489000 -n embed-certs-489000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-489000 -n embed-certs-489000: exit status 2 (156.752297ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-489000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-489000 -n embed-certs-489000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-489000 -n embed-certs-489000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (1.99s)

TestStartStop/group/newest-cni/serial/FirstStart (41.66s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-791000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
E0717 11:49:45.852487    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/skaffold-449000/client.crt: no such file or directory
E0717 11:49:48.256374    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0717 11:50:03.082493    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/no-preload-820000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-791000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (41.664002608s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.66s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.74s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-791000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.74s)

TestStartStop/group/newest-cni/serial/Stop (8.43s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-791000 --alsologtostderr -v=3
E0717 11:50:12.166902    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/custom-flannel-718000/client.crt: no such file or directory
E0717 11:50:16.323040    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/old-k8s-version-831000/client.crt: no such file or directory
E0717 11:50:19.132628    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/bridge-718000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-791000 --alsologtostderr -v=3: (8.432780771s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.43s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.39s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-791000 -n newest-cni-791000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-791000 -n newest-cni-791000: exit status 7 (68.425354ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-791000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.39s)

TestStartStop/group/newest-cni/serial/SecondStart (29.96s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-791000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
E0717 11:50:27.800600    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/false-718000/client.crt: no such file or directory
E0717 11:50:33.024798    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/kindnet-718000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-791000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (29.784960385s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-791000 -n newest-cni-791000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.96s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-qb2ft" [41a770f8-7955-4ba4-84c8-9fb40dfe7472] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0717 11:50:49.131544    1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/auto-718000/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-779776cb65-qb2ft" [41a770f8-7955-4ba4-84c8-9fb40dfe7472] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004498745s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-791000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.17s)

TestStartStop/group/newest-cni/serial/Pause (1.9s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-791000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-791000 -n newest-cni-791000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-791000 -n newest-cni-791000: exit status 2 (161.12264ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-791000 -n newest-cni-791000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-791000 -n newest-cni-791000: exit status 2 (161.590745ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-791000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-791000 -n newest-cni-791000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-791000 -n newest-cni-791000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.90s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-qb2ft" [41a770f8-7955-4ba4-84c8-9fb40dfe7472] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0039037s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-277000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-277000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (1.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-277000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-277000 -n default-k8s-diff-port-277000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-277000 -n default-k8s-diff-port-277000: exit status 2 (159.135864ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-277000 -n default-k8s-diff-port-277000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-277000 -n default-k8s-diff-port-277000: exit status 2 (161.103413ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-277000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-277000 -n default-k8s-diff-port-277000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-277000 -n default-k8s-diff-port-277000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (1.96s)

Test skip (21/338)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

TestDownloadOnly/v1.30.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (5.71s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-718000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-718000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-718000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-718000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-718000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-718000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-718000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-718000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-718000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-718000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-718000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> host: /etc/hosts:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> host: /etc/resolv.conf:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-718000

>>> host: crictl pods:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> host: crictl containers:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> k8s: describe netcat deployment:
error: context "cilium-718000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-718000" does not exist

>>> k8s: netcat logs:
error: context "cilium-718000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-718000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-718000" does not exist

>>> k8s: coredns logs:
error: context "cilium-718000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-718000" does not exist

>>> k8s: api server logs:
error: context "cilium-718000" does not exist

>>> host: /etc/cni:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> host: ip a s:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> host: ip r s:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> host: iptables-save:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> host: iptables table nat:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-718000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-718000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-718000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-718000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-718000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-718000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-718000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-718000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-718000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-718000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-718000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> host: kubelet daemon config:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> k8s: kubelet logs:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-718000

>>> host: docker daemon status:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> host: docker daemon config:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> host: docker system info:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> host: cri-docker daemon status:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> host: cri-docker daemon config:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> host: cri-dockerd version:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> host: containerd daemon status:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> host: containerd daemon config:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"
                                                
>>> host: containerd config dump:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-718000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718000"

                                                
                                                
----------------------- debugLogs end: cilium-718000 [took: 5.497426891s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-718000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-718000
--- SKIP: TestNetworkPlugins/group/cilium (5.71s)

TestStartStop/group/disable-driver-mounts (0.22s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-932000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-932000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)